00:00:00.000 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v23.11" build number 88 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3266 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.000 Started by timer 00:00:00.030 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.031 The recommended git tool is: git 00:00:00.031 using credential 00000000-0000-0000-0000-000000000002 00:00:00.033 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.046 Fetching changes from the remote Git repository 00:00:00.050 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.062 Using shallow fetch with depth 1 00:00:00.062 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.062 > git --version # timeout=10 00:00:00.075 > git --version # 'git version 2.39.2' 00:00:00.075 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.088 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.088 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.628 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.638 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.651 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:02.651 > git config core.sparsecheckout # timeout=10 00:00:02.663 > git read-tree -mu HEAD # timeout=10 00:00:02.682 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:02.703 Commit message: "inventory: add WCP3 to free inventory" 00:00:02.703 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:02.811 [Pipeline] Start of Pipeline 00:00:02.824 [Pipeline] library 00:00:02.825 Loading library shm_lib@master 00:00:02.826 Library shm_lib@master is cached. Copying from home. 00:00:02.838 [Pipeline] node 00:00:02.845 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:02.846 [Pipeline] { 00:00:02.854 [Pipeline] catchError 00:00:02.855 [Pipeline] { 00:00:02.864 [Pipeline] wrap 00:00:02.871 [Pipeline] { 00:00:02.876 [Pipeline] stage 00:00:02.877 [Pipeline] { (Prologue) 00:00:03.037 [Pipeline] sh 00:00:03.318 + logger -p user.info -t JENKINS-CI 00:00:03.341 [Pipeline] echo 00:00:03.342 Node: GP11 00:00:03.349 [Pipeline] sh 00:00:03.644 [Pipeline] setCustomBuildProperty 00:00:03.656 [Pipeline] echo 00:00:03.658 Cleanup processes 00:00:03.661 [Pipeline] sh 00:00:03.943 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.943 2993666 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.956 [Pipeline] sh 00:00:04.238 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.238 ++ grep -v 'sudo pgrep' 00:00:04.238 ++ awk '{print $1}' 00:00:04.238 + sudo kill -9 00:00:04.238 + true 00:00:04.254 [Pipeline] cleanWs 00:00:04.263 [WS-CLEANUP] Deleting project workspace... 00:00:04.264 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.270 [WS-CLEANUP] done 00:00:04.273 [Pipeline] setCustomBuildProperty 00:00:04.287 [Pipeline] sh 00:00:04.564 + sudo git config --global --replace-all safe.directory '*' 00:00:04.647 [Pipeline] httpRequest 00:00:04.675 [Pipeline] echo 00:00:04.676 Sorcerer 10.211.164.101 is alive 00:00:04.684 [Pipeline] httpRequest 00:00:04.689 HttpMethod: GET 00:00:04.689 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:04.690 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:04.708 Response Code: HTTP/1.1 200 OK 00:00:04.709 Success: Status code 200 is in the accepted range: 200,404 00:00:04.709 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:10.866 [Pipeline] sh 00:00:11.150 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:11.168 [Pipeline] httpRequest 00:00:11.209 [Pipeline] echo 00:00:11.211 Sorcerer 10.211.164.101 is alive 00:00:11.219 [Pipeline] httpRequest 00:00:11.224 HttpMethod: GET 00:00:11.225 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:11.225 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:11.228 Response Code: HTTP/1.1 200 OK 00:00:11.229 Success: Status code 200 is in the accepted range: 200,404 00:00:11.229 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:27.541 [Pipeline] sh 00:00:27.826 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:31.127 [Pipeline] sh 00:00:31.422 + git -C spdk log --oneline -n5 00:00:31.422 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:00:31.422 330a4f94d nvme: check pthread_mutex_destroy() return value 00:00:31.422 7b72c3ced nvme: add nvme_ctrlr_lock 00:00:31.422 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:00:31.422 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:00:31.443 [Pipeline] withCredentials 00:00:31.455 > git --version # timeout=10 00:00:31.467 > git --version # 'git version 2.39.2' 00:00:31.485 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:31.487 [Pipeline] { 00:00:31.497 [Pipeline] retry 00:00:31.499 [Pipeline] { 00:00:31.516 [Pipeline] sh 00:00:31.799 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:32.382 [Pipeline] } 00:00:32.400 [Pipeline] // retry 00:00:32.406 [Pipeline] } 00:00:32.423 [Pipeline] // withCredentials 00:00:32.432 [Pipeline] httpRequest 00:00:32.454 [Pipeline] echo 00:00:32.456 Sorcerer 10.211.164.101 is alive 00:00:32.464 [Pipeline] httpRequest 00:00:32.470 HttpMethod: GET 00:00:32.471 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:32.471 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:32.486 Response Code: HTTP/1.1 200 OK 00:00:32.487 Success: Status code 200 is in the accepted range: 200,404 00:00:32.487 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:01.112 [Pipeline] sh 00:01:01.398 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:03.335 [Pipeline] sh 00:01:03.615 + git -C dpdk log --oneline -n5 00:01:03.615 eeb0605f11 version: 23.11.0 00:01:03.615 238778122a doc: 
update release notes for 23.11 00:01:03.615 46aa6b3cfc doc: fix description of RSS features 00:01:03.615 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:03.615 7e421ae345 devtools: support skipping forbid rule check 00:01:03.625 [Pipeline] } 00:01:03.640 [Pipeline] // stage 00:01:03.648 [Pipeline] stage 00:01:03.650 [Pipeline] { (Prepare) 00:01:03.669 [Pipeline] writeFile 00:01:03.681 [Pipeline] sh 00:01:03.955 + logger -p user.info -t JENKINS-CI 00:01:03.968 [Pipeline] sh 00:01:04.247 + logger -p user.info -t JENKINS-CI 00:01:04.260 [Pipeline] sh 00:01:04.541 + cat autorun-spdk.conf 00:01:04.541 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.541 SPDK_TEST_NVMF=1 00:01:04.541 SPDK_TEST_NVME_CLI=1 00:01:04.541 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.541 SPDK_TEST_NVMF_NICS=e810 00:01:04.541 SPDK_TEST_VFIOUSER=1 00:01:04.541 SPDK_RUN_UBSAN=1 00:01:04.541 NET_TYPE=phy 00:01:04.541 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:04.541 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:04.548 RUN_NIGHTLY=1 00:01:04.553 [Pipeline] readFile 00:01:04.579 [Pipeline] withEnv 00:01:04.581 [Pipeline] { 00:01:04.594 [Pipeline] sh 00:01:04.870 + set -ex 00:01:04.870 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:04.870 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:04.870 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.870 ++ SPDK_TEST_NVMF=1 00:01:04.870 ++ SPDK_TEST_NVME_CLI=1 00:01:04.870 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.870 ++ SPDK_TEST_NVMF_NICS=e810 00:01:04.870 ++ SPDK_TEST_VFIOUSER=1 00:01:04.870 ++ SPDK_RUN_UBSAN=1 00:01:04.870 ++ NET_TYPE=phy 00:01:04.870 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:04.870 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:04.870 ++ RUN_NIGHTLY=1 00:01:04.870 + case $SPDK_TEST_NVMF_NICS in 00:01:04.870 + DRIVERS=ice 00:01:04.870 + [[ tcp == \r\d\m\a ]] 00:01:04.870 + [[ -n ice ]] 00:01:04.870 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:04.870 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:04.870 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:04.870 rmmod: ERROR: Module irdma is not currently loaded 00:01:04.870 rmmod: ERROR: Module i40iw is not currently loaded 00:01:04.870 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:04.870 + true 00:01:04.870 + for D in $DRIVERS 00:01:04.870 + sudo modprobe ice 00:01:04.870 + exit 0 00:01:04.879 [Pipeline] } 00:01:04.899 [Pipeline] // withEnv 00:01:04.907 [Pipeline] } 00:01:04.926 [Pipeline] // stage 00:01:04.938 [Pipeline] catchError 00:01:04.940 [Pipeline] { 00:01:04.958 [Pipeline] timeout 00:01:04.958 Timeout set to expire in 50 min 00:01:04.960 [Pipeline] { 00:01:04.976 [Pipeline] stage 00:01:04.978 [Pipeline] { (Tests) 00:01:04.995 [Pipeline] sh 00:01:05.275 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.275 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.275 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.275 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:05.275 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:05.275 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:05.275 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:05.275 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:05.275 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:05.275 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:05.275 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:05.275 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.275 + source /etc/os-release 00:01:05.275 ++ NAME='Fedora Linux' 00:01:05.275 ++ VERSION='38 (Cloud Edition)' 00:01:05.275 ++ ID=fedora 00:01:05.275 ++ VERSION_ID=38 00:01:05.275 ++ VERSION_CODENAME= 00:01:05.275 ++ PLATFORM_ID=platform:f38 00:01:05.275 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:05.275 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:05.275 ++ LOGO=fedora-logo-icon 00:01:05.275 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:05.275 ++ HOME_URL=https://fedoraproject.org/ 00:01:05.275 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:05.275 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:05.275 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:05.275 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:05.275 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:05.275 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:05.275 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:05.275 ++ SUPPORT_END=2024-05-14 00:01:05.275 ++ VARIANT='Cloud Edition' 00:01:05.275 ++ VARIANT_ID=cloud 00:01:05.275 + uname -a 00:01:05.275 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:05.275 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:06.212 Hugepages 00:01:06.212 node hugesize free / total 00:01:06.212 node0 1048576kB 0 / 0 00:01:06.212 node0 2048kB 0 / 0 00:01:06.212 node1 1048576kB 0 / 0 00:01:06.212 node1 2048kB 0 / 0 00:01:06.212 00:01:06.212 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:06.212 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:06.212 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:06.212 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:06.212 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:06.212 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:06.212 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:06.212 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:06.212 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:06.212 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:06.212 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:06.212 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:06.212 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:06.212 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:06.212 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:06.212 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:06.212 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:06.212 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:06.212 + rm -f /tmp/spdk-ld-path 00:01:06.212 + source autorun-spdk.conf 00:01:06.212 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.212 ++ SPDK_TEST_NVMF=1 00:01:06.212 ++ SPDK_TEST_NVME_CLI=1 00:01:06.212 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:06.212 ++ SPDK_TEST_NVMF_NICS=e810 00:01:06.212 ++ SPDK_TEST_VFIOUSER=1 00:01:06.212 ++ SPDK_RUN_UBSAN=1 00:01:06.212 ++ NET_TYPE=phy 00:01:06.212 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:06.212 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:06.212 ++ RUN_NIGHTLY=1 00:01:06.212 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:06.212 + [[ -n '' ]] 00:01:06.212 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:06.212 + for M in /var/spdk/build-*-manifest.txt 00:01:06.212 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:06.212 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:06.212 + for M in /var/spdk/build-*-manifest.txt 00:01:06.212 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:06.212 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:06.212 ++ uname 00:01:06.212 + [[ Linux == \L\i\n\u\x ]] 00:01:06.212 + sudo dmesg -T 00:01:06.470 + sudo dmesg --clear 00:01:06.470 + dmesg_pid=2994994 00:01:06.470 + [[ Fedora Linux == FreeBSD ]] 00:01:06.470 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:06.470 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:06.470 + sudo dmesg -Tw 00:01:06.470 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:06.470 + [[ -x /usr/src/fio-static/fio ]] 00:01:06.470 + export FIO_BIN=/usr/src/fio-static/fio 00:01:06.470 + FIO_BIN=/usr/src/fio-static/fio 00:01:06.470 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:06.470 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:06.470 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:06.470 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:06.470 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:06.470 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:06.470 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:06.470 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:06.470 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:06.470 Test configuration: 00:01:06.470 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.470 SPDK_TEST_NVMF=1 00:01:06.470 SPDK_TEST_NVME_CLI=1 00:01:06.470 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:06.470 SPDK_TEST_NVMF_NICS=e810 00:01:06.470 SPDK_TEST_VFIOUSER=1 00:01:06.470 SPDK_RUN_UBSAN=1 00:01:06.470 NET_TYPE=phy 00:01:06.470 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:06.470 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:06.470 RUN_NIGHTLY=1 05:15:13 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:06.470 05:15:13 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:06.470 05:15:13 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:06.470 05:15:13 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:06.470 05:15:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.470 05:15:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.470 05:15:13 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.470 05:15:13 -- paths/export.sh@5 -- $ export PATH 00:01:06.470 05:15:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.470 05:15:13 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:06.470 05:15:13 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:06.470 05:15:13 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720926913.XXXXXX 00:01:06.470 05:15:13 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720926913.s4i9yl 00:01:06.470 05:15:13 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:06.470 05:15:13 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:01:06.470 05:15:13 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:06.470 05:15:13 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:06.470 05:15:13 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:06.470 05:15:13 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:06.470 05:15:13 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:06.470 05:15:13 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:06.470 05:15:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:06.470 05:15:13 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:06.470 05:15:13 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:06.470 05:15:13 -- pm/common@17 -- $ local monitor 00:01:06.470 05:15:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.470 05:15:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.470 05:15:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.470 05:15:13 -- pm/common@21 -- $ date +%s 00:01:06.470 05:15:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.470 05:15:13 -- pm/common@21 -- $ date +%s 00:01:06.470 05:15:13 -- pm/common@25 -- $ sleep 1 00:01:06.470 05:15:13 -- pm/common@21 -- $ date +%s 00:01:06.470 05:15:13 -- pm/common@21 -- $ date +%s 00:01:06.470 05:15:13 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720926913 00:01:06.470 05:15:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720926913 00:01:06.470 05:15:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720926913 00:01:06.470 05:15:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720926913 00:01:06.470 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720926913_collect-vmstat.pm.log 00:01:06.470 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720926913_collect-cpu-load.pm.log 00:01:06.470 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720926913_collect-cpu-temp.pm.log 00:01:06.470 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720926913_collect-bmc-pm.bmc.pm.log 00:01:07.405 05:15:14 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:07.405 05:15:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:07.405 05:15:14 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:07.405 05:15:14 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:07.405 05:15:14 -- spdk/autobuild.sh@16 -- $ date -u 00:01:07.405 Sun Jul 14 03:15:14 AM UTC 2024 00:01:07.405 05:15:14 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:07.405 v24.05-13-g5fa2f5086 00:01:07.405 05:15:14 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:07.405 05:15:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:07.405 05:15:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:07.405 05:15:14 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:07.405 05:15:14 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:07.405 05:15:14 -- common/autotest_common.sh@10 -- $ set +x 00:01:07.405 ************************************ 00:01:07.405 START TEST ubsan 00:01:07.405 ************************************ 00:01:07.405 05:15:14 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:07.405 using ubsan 00:01:07.405 00:01:07.405 real 0m0.000s 00:01:07.405 user 0m0.000s 00:01:07.405 sys 0m0.000s 00:01:07.405 05:15:14 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:07.405 05:15:14 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:07.405 ************************************ 00:01:07.405 END TEST ubsan 00:01:07.405 ************************************ 00:01:07.405 05:15:14 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:07.405 05:15:14 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:07.405 05:15:14 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:07.405 05:15:14 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:07.405 05:15:14 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:07.405 05:15:14 -- common/autotest_common.sh@10 -- $ set +x 
00:01:07.405 ************************************ 00:01:07.405 START TEST build_native_dpdk 00:01:07.405 ************************************ 00:01:07.405 05:15:14 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:07.405 05:15:14 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:07.405 05:15:14 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:07.405 05:15:14 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:07.405 05:15:14 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:07.405 05:15:14 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:07.405 05:15:14 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:07.405 05:15:14 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:07.405 05:15:14 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:07.405 05:15:14 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:07.405 05:15:14 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:07.405 05:15:14 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:07.405 05:15:14 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:07.405 05:15:14 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:07.405 05:15:14 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:07.405 05:15:14 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:07.664 05:15:14 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:07.664 05:15:14 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:07.664 05:15:14 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:07.664 05:15:14 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:07.664 05:15:14 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:07.664 eeb0605f11 version: 23.11.0 00:01:07.664 238778122a doc: update release notes for 23.11 00:01:07.664 46aa6b3cfc doc: fix description of RSS features 00:01:07.664 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:07.664 7e421ae345 devtools: support skipping forbid rule check 00:01:07.664 05:15:14 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:07.664 05:15:14 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:07.664 05:15:14 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:07.664 05:15:14 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:07.664 05:15:14 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:07.664 05:15:14 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:07.664 05:15:14 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:07.664 05:15:14 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:07.664 05:15:14 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:07.664 05:15:14 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:07.664 05:15:14 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:07.664 05:15:14 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:07.665 05:15:14 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:07.665 05:15:14 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:07.665 05:15:14 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:07.665 05:15:14 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:07.665 05:15:14 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:07.665 05:15:14 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:07.665 05:15:14 
build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:07.665 05:15:14 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:07.665 05:15:14 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:07.665 patching file config/rte_config.h 00:01:07.665 Hunk #1 succeeded at 60 (offset 1 line). 00:01:07.665 05:15:14 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:07.665 05:15:14 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:07.665 05:15:14 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:07.665 05:15:14 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:07.665 05:15:14 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:11.855 The Meson build system 00:01:11.855 Version: 1.3.1 00:01:11.855 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:11.855 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:11.855 Build type: native build 00:01:11.855 Program cat found: YES (/usr/bin/cat) 00:01:11.855 Project name: DPDK 00:01:11.855 Project version: 23.11.0 00:01:11.855 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:11.855 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:11.855 Host machine cpu family: x86_64 00:01:11.855 Host machine cpu: x86_64 00:01:11.855 Message: ## Building in Developer Mode ## 00:01:11.855 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:11.855 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:11.855 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:11.855 Program python3 found: YES (/usr/bin/python3) 00:01:11.855 Program cat found: YES (/usr/bin/cat) 00:01:11.855 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:11.855 Compiler for C supports arguments -march=native: YES 00:01:11.855 Checking for size of "void *" : 8 00:01:11.855 Checking for size of "void *" : 8 (cached) 00:01:11.855 Library m found: YES 00:01:11.855 Library numa found: YES 00:01:11.855 Has header "numaif.h" : YES 00:01:11.855 Library fdt found: NO 00:01:11.855 Library execinfo found: NO 00:01:11.855 Has header "execinfo.h" : YES 00:01:11.855 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:11.855 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:11.855 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:11.855 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:11.855 Run-time dependency openssl found: YES 3.0.9 00:01:11.855 Run-time dependency libpcap found: YES 1.10.4 00:01:11.855 Has header "pcap.h" with dependency libpcap: YES 00:01:11.855 Compiler for C supports arguments -Wcast-qual: YES 00:01:11.855 Compiler for C supports arguments -Wdeprecated: YES 00:01:11.855 Compiler for C supports arguments -Wformat: YES 00:01:11.855 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:11.855 Compiler for C supports arguments -Wformat-security: NO 00:01:11.855 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:11.855 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:11.855 Compiler for C supports arguments -Wnested-externs: YES 00:01:11.855 Compiler for C supports arguments -Wold-style-definition: YES 00:01:11.855 Compiler for C supports arguments -Wpointer-arith: YES 00:01:11.855 Compiler for C supports arguments -Wsign-compare: YES 00:01:11.855 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:11.855 Compiler for C supports arguments -Wundef: YES 00:01:11.855 Compiler for C supports arguments -Wwrite-strings: YES 00:01:11.855 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:11.855 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:11.855 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:11.855 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:11.855 Program objdump found: YES (/usr/bin/objdump) 00:01:11.855 Compiler for C supports arguments -mavx512f: YES 00:01:11.855 Checking if "AVX512 checking" compiles: YES 00:01:11.855 Fetching value of define "__SSE4_2__" : 1 00:01:11.855 Fetching value of define "__AES__" : 1 00:01:11.855 Fetching value of define "__AVX__" : 1 00:01:11.855 Fetching value of define "__AVX2__" : (undefined) 00:01:11.855 Fetching value of define "__AVX512BW__" : (undefined) 00:01:11.855 Fetching value of define "__AVX512CD__" : (undefined) 00:01:11.855 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:11.855 Fetching value of define "__AVX512F__" : (undefined) 00:01:11.855 Fetching value of define "__AVX512VL__" : (undefined) 00:01:11.855 Fetching value of define "__PCLMUL__" : 1 00:01:11.855 Fetching value of define "__RDRND__" : 1 00:01:11.855 Fetching value of define "__RDSEED__" : (undefined) 00:01:11.855 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:11.855 Fetching value of define "__znver1__" : (undefined) 00:01:11.855 Fetching value of define "__znver2__" : (undefined) 00:01:11.855 Fetching value of define "__znver3__" : (undefined) 00:01:11.855 Fetching value of define "__znver4__" : (undefined) 00:01:11.855 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:11.855 Message: lib/log: Defining dependency "log" 00:01:11.855 Message: lib/kvargs: Defining dependency 
"kvargs" 00:01:11.855 Message: lib/telemetry: Defining dependency "telemetry" 00:01:11.855 Checking for function "getentropy" : NO 00:01:11.855 Message: lib/eal: Defining dependency "eal" 00:01:11.855 Message: lib/ring: Defining dependency "ring" 00:01:11.855 Message: lib/rcu: Defining dependency "rcu" 00:01:11.855 Message: lib/mempool: Defining dependency "mempool" 00:01:11.855 Message: lib/mbuf: Defining dependency "mbuf" 00:01:11.855 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:11.855 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:11.855 Compiler for C supports arguments -mpclmul: YES 00:01:11.855 Compiler for C supports arguments -maes: YES 00:01:11.855 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:11.855 Compiler for C supports arguments -mavx512bw: YES 00:01:11.855 Compiler for C supports arguments -mavx512dq: YES 00:01:11.855 Compiler for C supports arguments -mavx512vl: YES 00:01:11.855 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:11.855 Compiler for C supports arguments -mavx2: YES 00:01:11.855 Compiler for C supports arguments -mavx: YES 00:01:11.855 Message: lib/net: Defining dependency "net" 00:01:11.855 Message: lib/meter: Defining dependency "meter" 00:01:11.855 Message: lib/ethdev: Defining dependency "ethdev" 00:01:11.855 Message: lib/pci: Defining dependency "pci" 00:01:11.855 Message: lib/cmdline: Defining dependency "cmdline" 00:01:11.855 Message: lib/metrics: Defining dependency "metrics" 00:01:11.855 Message: lib/hash: Defining dependency "hash" 00:01:11.855 Message: lib/timer: Defining dependency "timer" 00:01:11.855 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:11.855 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:11.855 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:11.855 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:11.855 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:11.855 Message: lib/acl: Defining dependency "acl" 00:01:11.855 Message: lib/bbdev: Defining dependency "bbdev" 00:01:11.855 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:11.855 Run-time dependency libelf found: YES 0.190 00:01:11.855 Message: lib/bpf: Defining dependency "bpf" 00:01:11.855 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:11.855 Message: lib/compressdev: Defining dependency "compressdev" 00:01:11.855 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:11.855 Message: lib/distributor: Defining dependency "distributor" 00:01:11.855 Message: lib/dmadev: Defining dependency "dmadev" 00:01:11.855 Message: lib/efd: Defining dependency "efd" 00:01:11.855 Message: lib/eventdev: Defining dependency "eventdev" 00:01:11.855 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:11.856 Message: lib/gpudev: Defining dependency "gpudev" 00:01:11.856 Message: lib/gro: Defining dependency "gro" 00:01:11.856 Message: lib/gso: Defining dependency "gso" 00:01:11.856 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:11.856 Message: lib/jobstats: Defining dependency "jobstats" 00:01:11.856 Message: lib/latencystats: Defining dependency "latencystats" 00:01:11.856 Message: lib/lpm: Defining dependency "lpm" 00:01:11.856 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:11.856 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:11.856 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:11.856 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:11.856 Message: lib/member: Defining dependency "member" 00:01:11.856 Message: lib/pcapng: Defining dependency "pcapng" 00:01:11.856 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:11.856 Message: lib/power: Defining dependency "power" 00:01:11.856 Message: lib/rawdev: Defining dependency "rawdev" 00:01:11.856 Message: lib/regexdev: Defining dependency "regexdev" 00:01:11.856 Message: lib/mldev: Defining dependency "mldev" 00:01:11.856 Message: lib/rib: Defining dependency "rib" 00:01:11.856 Message: lib/reorder: Defining dependency "reorder" 00:01:11.856 Message: lib/sched: Defining dependency "sched" 00:01:11.856 Message: lib/security: Defining dependency "security" 00:01:11.856 Message: lib/stack: Defining dependency "stack" 00:01:11.856 Has header "linux/userfaultfd.h" : YES 00:01:11.856 Has header "linux/vduse.h" : YES 00:01:11.856 Message: lib/vhost: Defining dependency "vhost" 00:01:11.856 Message: lib/ipsec: Defining dependency "ipsec" 00:01:11.856 Message: lib/pdcp: Defining dependency "pdcp" 00:01:11.856 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:11.856 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:11.856 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:11.856 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:11.856 Message: lib/fib: Defining dependency "fib" 00:01:11.856 Message: lib/port: Defining dependency "port" 00:01:11.856 Message: lib/pdump: Defining dependency "pdump" 00:01:11.856 Message: lib/table: Defining dependency "table" 00:01:11.856 Message: lib/pipeline: Defining dependency "pipeline" 00:01:11.856 Message: lib/graph: Defining dependency "graph" 00:01:11.856 Message: lib/node: Defining dependency "node" 00:01:13.238 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:13.238 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:13.238 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:13.238 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:13.238 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:13.238 Compiler for C supports arguments -Wno-unused-value: YES 00:01:13.238 Compiler for C supports arguments -Wno-format: YES 00:01:13.238 Compiler for C supports arguments -Wno-format-security: YES 00:01:13.238 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:13.238 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:13.238 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:13.238 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:13.238 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:13.238 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:13.238 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:13.238 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:13.238 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:13.238 Has header "sys/epoll.h" : YES 00:01:13.238 Program doxygen found: YES (/usr/bin/doxygen) 00:01:13.238 Configuring doxy-api-html.conf using configuration 00:01:13.238 Configuring doxy-api-man.conf using configuration 00:01:13.238 Program mandb found: YES (/usr/bin/mandb) 00:01:13.238 Program sphinx-build found: NO 00:01:13.238 Configuring rte_build_config.h using configuration 00:01:13.238 Message: 00:01:13.238 ================= 00:01:13.238 Applications Enabled 00:01:13.238 
================= 00:01:13.238 00:01:13.238 apps: 00:01:13.238 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:13.238 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:13.238 test-pmd, test-regex, test-sad, test-security-perf, 00:01:13.238 00:01:13.238 Message: 00:01:13.238 ================= 00:01:13.238 Libraries Enabled 00:01:13.238 ================= 00:01:13.238 00:01:13.238 libs: 00:01:13.238 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:13.238 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:13.238 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:13.238 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:13.238 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:13.238 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:13.238 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:13.238 00:01:13.238 00:01:13.238 Message: 00:01:13.238 =============== 00:01:13.238 Drivers Enabled 00:01:13.238 =============== 00:01:13.238 00:01:13.238 common: 00:01:13.238 00:01:13.238 bus: 00:01:13.238 pci, vdev, 00:01:13.238 mempool: 00:01:13.238 ring, 00:01:13.238 dma: 00:01:13.238 00:01:13.238 net: 00:01:13.238 i40e, 00:01:13.238 raw: 00:01:13.238 00:01:13.238 crypto: 00:01:13.238 00:01:13.238 compress: 00:01:13.238 00:01:13.238 regex: 00:01:13.238 00:01:13.238 ml: 00:01:13.238 00:01:13.238 vdpa: 00:01:13.238 00:01:13.238 event: 00:01:13.238 00:01:13.238 baseband: 00:01:13.238 00:01:13.238 gpu: 00:01:13.238 00:01:13.238 00:01:13.238 Message: 00:01:13.238 ================= 00:01:13.238 Content Skipped 00:01:13.238 ================= 00:01:13.238 00:01:13.238 apps: 00:01:13.238 00:01:13.238 libs: 00:01:13.238 00:01:13.238 drivers: 00:01:13.238 common/cpt: not in enabled drivers build config 00:01:13.238 common/dpaax: not in enabled drivers build config 00:01:13.238 common/iavf: not in enabled drivers build config 00:01:13.238 common/idpf: not in enabled drivers build config 00:01:13.238 common/mvep: not in enabled drivers build config 00:01:13.238 common/octeontx: not in enabled drivers build config 00:01:13.238 bus/auxiliary: not in enabled drivers build config 00:01:13.238 bus/cdx: not in enabled drivers build config 00:01:13.238 bus/dpaa: not in enabled drivers build config 00:01:13.238 bus/fslmc: not in enabled drivers build config 00:01:13.238 bus/ifpga: not in enabled drivers build config 00:01:13.238 bus/platform: not in enabled drivers build config 00:01:13.238 bus/vmbus: not in enabled drivers build config 00:01:13.238 common/cnxk: not in enabled drivers build config 00:01:13.238 common/mlx5: not in enabled drivers build config 00:01:13.238 common/nfp: not in enabled drivers build config 00:01:13.238 common/qat: not in enabled drivers build config 00:01:13.238 common/sfc_efx: not in enabled drivers build config 00:01:13.238 mempool/bucket: not in enabled drivers build config 00:01:13.238 mempool/cnxk: not in enabled drivers build config 00:01:13.238 mempool/dpaa: not in enabled drivers build config 00:01:13.238 mempool/dpaa2: not in enabled drivers build config 00:01:13.238 mempool/octeontx: not in enabled drivers build config 00:01:13.238 mempool/stack: not in enabled drivers build config 00:01:13.238 dma/cnxk: not in enabled drivers build config 00:01:13.238 dma/dpaa: not in enabled drivers build config 00:01:13.238 dma/dpaa2: not in enabled drivers build 
config 00:01:13.238 dma/hisilicon: not in enabled drivers build config 00:01:13.238 dma/idxd: not in enabled drivers build config 00:01:13.238 dma/ioat: not in enabled drivers build config 00:01:13.238 dma/skeleton: not in enabled drivers build config 00:01:13.238 net/af_packet: not in enabled drivers build config 00:01:13.238 net/af_xdp: not in enabled drivers build config 00:01:13.238 net/ark: not in enabled drivers build config 00:01:13.238 net/atlantic: not in enabled drivers build config 00:01:13.238 net/avp: not in enabled drivers build config 00:01:13.238 net/axgbe: not in enabled drivers build config 00:01:13.238 net/bnx2x: not in enabled drivers build config 00:01:13.238 net/bnxt: not in enabled drivers build config 00:01:13.238 net/bonding: not in enabled drivers build config 00:01:13.238 net/cnxk: not in enabled drivers build config 00:01:13.238 net/cpfl: not in enabled drivers build config 00:01:13.238 net/cxgbe: not in enabled drivers build config 00:01:13.238 net/dpaa: not in enabled drivers build config 00:01:13.238 net/dpaa2: not in enabled drivers build config 00:01:13.238 net/e1000: not in enabled drivers build config 00:01:13.238 net/ena: not in enabled drivers build config 00:01:13.238 net/enetc: not in enabled drivers build config 00:01:13.238 net/enetfec: not in enabled drivers build config 00:01:13.238 net/enic: not in enabled drivers build config 00:01:13.238 net/failsafe: not in enabled drivers build config 00:01:13.238 net/fm10k: not in enabled drivers build config 00:01:13.238 net/gve: not in enabled drivers build config 00:01:13.238 net/hinic: not in enabled drivers build config 00:01:13.238 net/hns3: not in enabled drivers build config 00:01:13.238 net/iavf: not in enabled drivers build config 00:01:13.238 net/ice: not in enabled drivers build config 00:01:13.238 net/idpf: not in enabled drivers build config 00:01:13.238 net/igc: not in enabled drivers build config 00:01:13.238 net/ionic: not in enabled drivers build config 00:01:13.238 net/ipn3ke: not in enabled drivers build config 00:01:13.238 net/ixgbe: not in enabled drivers build config 00:01:13.238 net/mana: not in enabled drivers build config 00:01:13.238 net/memif: not in enabled drivers build config 00:01:13.238 net/mlx4: not in enabled drivers build config 00:01:13.238 net/mlx5: not in enabled drivers build config 00:01:13.238 net/mvneta: not in enabled drivers build config 00:01:13.238 net/mvpp2: not in enabled drivers build config 00:01:13.238 net/netvsc: not in enabled drivers build config 00:01:13.238 net/nfb: not in enabled drivers build config 00:01:13.238 net/nfp: not in enabled drivers build config 00:01:13.238 net/ngbe: not in enabled drivers build config 00:01:13.238 net/null: not in enabled drivers build config 00:01:13.238 net/octeontx: not in enabled drivers build config 00:01:13.238 net/octeon_ep: not in enabled drivers build config 00:01:13.238 net/pcap: not in enabled drivers build config 00:01:13.238 net/pfe: not in enabled drivers build config 00:01:13.238 net/qede: not in enabled drivers build config 00:01:13.238 net/ring: not in enabled drivers build config 00:01:13.238 net/sfc: not in enabled drivers build config 00:01:13.238 net/softnic: not in enabled drivers build config 00:01:13.238 net/tap: not in enabled drivers build config 00:01:13.238 net/thunderx: not in enabled drivers build config 00:01:13.238 net/txgbe: not in enabled drivers build config 00:01:13.238 net/vdev_netvsc: not in enabled drivers build config 00:01:13.238 net/vhost: not in enabled drivers build config 
00:01:13.238 net/virtio: not in enabled drivers build config 00:01:13.238 net/vmxnet3: not in enabled drivers build config 00:01:13.238 raw/cnxk_bphy: not in enabled drivers build config 00:01:13.238 raw/cnxk_gpio: not in enabled drivers build config 00:01:13.238 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:13.238 raw/ifpga: not in enabled drivers build config 00:01:13.238 raw/ntb: not in enabled drivers build config 00:01:13.238 raw/skeleton: not in enabled drivers build config 00:01:13.238 crypto/armv8: not in enabled drivers build config 00:01:13.238 crypto/bcmfs: not in enabled drivers build config 00:01:13.238 crypto/caam_jr: not in enabled drivers build config 00:01:13.238 crypto/ccp: not in enabled drivers build config 00:01:13.238 crypto/cnxk: not in enabled drivers build config 00:01:13.238 crypto/dpaa_sec: not in enabled drivers build config 00:01:13.238 crypto/dpaa2_sec: not in enabled drivers build config 00:01:13.238 crypto/ipsec_mb: not in enabled drivers build config 00:01:13.238 crypto/mlx5: not in enabled drivers build config 00:01:13.238 crypto/mvsam: not in enabled drivers build config 00:01:13.238 crypto/nitrox: not in enabled drivers build config 00:01:13.239 crypto/null: not in enabled drivers build config 00:01:13.239 crypto/octeontx: not in enabled drivers build config 00:01:13.239 crypto/openssl: not in enabled drivers build config 00:01:13.239 crypto/scheduler: not in enabled drivers build config 00:01:13.239 crypto/uadk: not in enabled drivers build config 00:01:13.239 crypto/virtio: not in enabled drivers build config 00:01:13.239 compress/isal: not in enabled drivers build config 00:01:13.239 compress/mlx5: not in enabled drivers build config 00:01:13.239 compress/octeontx: not in enabled drivers build config 00:01:13.239 compress/zlib: not in enabled drivers build config 00:01:13.239 regex/mlx5: not in enabled drivers build config 00:01:13.239 regex/cn9k: not in enabled drivers build config 00:01:13.239 ml/cnxk: not in enabled drivers build config 00:01:13.239 vdpa/ifc: not in enabled drivers build config 00:01:13.239 vdpa/mlx5: not in enabled drivers build config 00:01:13.239 vdpa/nfp: not in enabled drivers build config 00:01:13.239 vdpa/sfc: not in enabled drivers build config 00:01:13.239 event/cnxk: not in enabled drivers build config 00:01:13.239 event/dlb2: not in enabled drivers build config 00:01:13.239 event/dpaa: not in enabled drivers build config 00:01:13.239 event/dpaa2: not in enabled drivers build config 00:01:13.239 event/dsw: not in enabled drivers build config 00:01:13.239 event/opdl: not in enabled drivers build config 00:01:13.239 event/skeleton: not in enabled drivers build config 00:01:13.239 event/sw: not in enabled drivers build config 00:01:13.239 event/octeontx: not in enabled drivers build config 00:01:13.239 baseband/acc: not in enabled drivers build config 00:01:13.239 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:13.239 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:13.239 baseband/la12xx: not in enabled drivers build config 00:01:13.239 baseband/null: not in enabled drivers build config 00:01:13.239 baseband/turbo_sw: not in enabled drivers build config 00:01:13.239 gpu/cuda: not in enabled drivers build config 00:01:13.239 00:01:13.239 00:01:13.239 Build targets in project: 220 00:01:13.239 00:01:13.239 DPDK 23.11.0 00:01:13.239 00:01:13.239 User defined options 00:01:13.239 libdir : lib 00:01:13.239 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:13.239 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:13.239 c_link_args : 00:01:13.239 enable_docs : false 00:01:13.239 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:13.239 enable_kmods : false 00:01:13.239 machine : native 00:01:13.239 tests : false 00:01:13.239 00:01:13.239 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:13.239 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:13.239 05:15:20 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:13.239 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:13.239 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:13.239 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:13.239 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:13.239 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:13.239 [5/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:13.500 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:13.500 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:13.500 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:13.500 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:13.500 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:13.500 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:13.500 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:13.500 [13/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:13.500 [14/710] Linking static target lib/librte_kvargs.a 00:01:13.500 [15/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:13.500 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:13.500 [17/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:13.500 [18/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:13.500 [19/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:13.765 [20/710] Linking static target lib/librte_log.a 00:01:13.765 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:13.766 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.343 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.343 [24/710] Linking target lib/librte_log.so.24.0 00:01:14.343 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:14.343 [26/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:14.343 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:14.343 [28/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:14.343 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:14.343 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:14.343 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:14.343 [32/710] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:14.343 [33/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:14.343 [34/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:14.603 [35/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:14.603 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:14.603 [37/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:14.603 [38/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:14.603 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:14.603 [40/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:14.603 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:14.603 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:14.603 [43/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:14.603 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:14.603 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:14.603 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:14.603 [47/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:14.603 [48/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:14.603 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:14.603 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:14.603 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:14.603 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:14.603 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:14.603 [54/710] Linking target lib/librte_kvargs.so.24.0 00:01:14.603 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:14.603 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:14.603 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:14.603 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:14.603 [59/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:14.603 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:14.866 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:14.866 [62/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:14.866 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:14.866 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:14.866 [65/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:15.129 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:15.129 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:15.129 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:15.129 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:15.129 [70/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:15.129 [71/710] Linking static target lib/librte_pci.a 00:01:15.129 [72/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:15.391 [73/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:15.391 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:15.391 [75/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:15.391 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:15.391 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:15.391 [78/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:15.391 [79/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.649 [80/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:15.649 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:15.649 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:15.649 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:15.649 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:15.649 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:15.649 [86/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:15.650 [87/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:15.650 [88/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:15.650 [89/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:15.650 [90/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:15.650 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:15.650 [92/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:15.650 [93/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:15.650 [94/710] Linking static target lib/librte_ring.a 00:01:15.650 [95/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:15.650 [96/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:15.650 [97/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:15.911 [98/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:15.911 [99/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:15.911 [100/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:15.911 [101/710] Linking static target lib/librte_meter.a 00:01:15.911 [102/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:15.911 [103/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:15.911 [104/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:15.911 [105/710] Linking static target lib/librte_telemetry.a 00:01:15.911 [106/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:15.911 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:15.911 [108/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:15.911 [109/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:16.173 [110/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:16.173 [111/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:16.173 [112/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:16.173 
[113/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:16.173 [114/710] Linking static target lib/librte_eal.a 00:01:16.173 [115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:16.173 [116/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.173 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.173 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:16.173 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:16.173 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:16.431 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:16.431 [122/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:16.431 [123/710] Linking static target lib/librte_net.a 00:01:16.431 [124/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:16.431 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:16.431 [126/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:16.692 [127/710] Linking static target lib/librte_cmdline.a 00:01:16.692 [128/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:16.692 [129/710] Linking static target lib/librte_mempool.a 00:01:16.692 [130/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.692 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:16.692 [132/710] Linking target lib/librte_telemetry.so.24.0 00:01:16.692 [133/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:16.692 [134/710] Linking static target lib/librte_cfgfile.a 00:01:16.692 [135/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:16.692 [136/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.956 [137/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:16.956 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:16.956 [139/710] Linking static target lib/librte_metrics.a 00:01:16.956 [140/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:16.956 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:16.956 [142/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:16.956 [143/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:17.219 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:17.219 [145/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:17.219 [146/710] Linking static target lib/librte_bitratestats.a 00:01:17.219 [147/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:17.219 [148/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:17.219 [149/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:17.219 [150/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:17.219 [151/710] Linking static target lib/librte_rcu.a 00:01:17.219 [152/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:17.219 [153/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:17.219 [154/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:17.479 [155/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:17.479 [156/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:17.479 [157/710] Linking static target lib/librte_timer.a 00:01:17.479 [158/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:17.479 [159/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.479 [160/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:17.479 [161/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.479 [162/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:17.479 [163/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.479 [164/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:17.744 [165/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.744 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:17.744 [167/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:17.744 [168/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:17.744 [169/710] Linking static target lib/librte_bbdev.a 00:01:17.744 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:18.005 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.005 [172/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:18.005 [173/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.005 [174/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:18.005 [175/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:18.005 [176/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:18.005 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:18.005 [178/710] Linking static target lib/librte_compressdev.a 00:01:18.005 [179/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:18.267 [180/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:18.267 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:18.532 [182/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:18.532 [183/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:18.532 [184/710] Linking static target lib/librte_distributor.a 00:01:18.532 [185/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:18.532 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:18.792 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.792 [188/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:18.792 [189/710] Linking static target lib/librte_bpf.a 00:01:18.792 [190/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:18.792 [191/710] Linking static target lib/librte_dmadev.a 00:01:18.792 [192/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:19.058 [193/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 
00:01:19.058 [194/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:19.058 [195/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.058 [196/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.058 [197/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:19.058 [198/710] Linking static target lib/librte_dispatcher.a 00:01:19.058 [199/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:19.058 [200/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:19.058 [201/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:19.058 [202/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:19.058 [203/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:19.058 [204/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:19.058 [205/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:19.058 [206/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:19.058 [207/710] Linking static target lib/librte_gpudev.a 00:01:19.320 [208/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:19.320 [209/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.320 [210/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:19.320 [211/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:19.320 [212/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:19.320 [213/710] Linking static target lib/librte_gro.a 00:01:19.320 [214/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:19.320 [215/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:19.320 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.320 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:19.320 [218/710] Linking static target lib/librte_jobstats.a 00:01:19.585 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:19.585 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:19.585 [221/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.585 [222/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.843 [223/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:19.843 [224/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:19.844 [225/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:19.844 [226/710] Linking static target lib/librte_latencystats.a 00:01:20.109 [227/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:20.109 [228/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.109 [229/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:20.109 [230/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:20.109 [231/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:20.109 [232/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:20.109 [233/710] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:20.109 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:20.109 [235/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:20.109 [236/710] Linking static target lib/librte_ip_frag.a 00:01:20.372 [237/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:20.372 [238/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.372 [239/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:20.372 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:20.372 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:20.372 [242/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:20.634 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:20.634 [244/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:20.634 [245/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.634 [246/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.634 [247/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:20.634 [248/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:20.634 [249/710] Linking static target lib/librte_gso.a 00:01:20.895 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:20.895 [251/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:20.895 [252/710] Linking static target lib/librte_regexdev.a 00:01:20.895 [253/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:20.895 [254/710] Linking static target lib/librte_rawdev.a 00:01:20.895 [255/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:20.895 [256/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:20.895 [257/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:20.895 [258/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:21.156 [259/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.156 [260/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:21.157 [261/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:21.157 [262/710] Linking static target lib/librte_mldev.a 00:01:21.157 [263/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:21.157 [264/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:21.157 [265/710] Linking static target lib/librte_efd.a 00:01:21.157 [266/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:21.157 [267/710] Linking static target lib/librte_pcapng.a 00:01:21.157 [268/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:21.422 [269/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:21.422 [270/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:21.422 [271/710] Linking static target lib/acl/libavx2_tmp.a 00:01:21.422 [272/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:21.422 [273/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:21.422 [274/710] Linking static target lib/librte_stack.a 00:01:21.422 [275/710] 
Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:21.422 [276/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:21.422 [277/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:21.422 [278/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:21.681 [279/710] Linking static target lib/librte_lpm.a 00:01:21.681 [280/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.681 [281/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.681 [282/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:21.681 [283/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:21.681 [284/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.681 [285/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.681 [286/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:21.681 [287/710] Linking static target lib/librte_hash.a 00:01:21.944 [288/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:21.944 [289/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:21.944 [290/710] Linking static target lib/librte_power.a 00:01:21.944 [291/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:21.944 [292/710] Linking static target lib/librte_reorder.a 00:01:21.944 [293/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:21.944 [294/710] Linking static target lib/acl/libavx512_tmp.a 00:01:21.945 [295/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:21.945 [296/710] Linking static target lib/librte_acl.a 00:01:21.945 [297/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.945 [298/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:21.945 [299/710] Linking static target lib/librte_security.a 00:01:22.206 [300/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.206 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:22.206 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:22.517 [303/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:22.517 [304/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:22.517 [305/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:22.517 [306/710] Linking static target lib/librte_rib.a 00:01:22.517 [307/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:22.517 [308/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.517 [309/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:22.517 [310/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.517 [311/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:22.517 [312/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:22.517 [313/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:22.517 [314/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:22.517 [315/710] Linking static target lib/librte_mbuf.a 00:01:22.517 [316/710] Generating lib/hash.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:22.783 [317/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:22.783 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.783 [319/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:22.783 [320/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:22.783 [321/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:22.783 [322/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:22.783 [323/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:22.783 [324/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:22.783 [325/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.783 [326/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:23.043 [327/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:23.043 [328/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.304 [329/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.304 [330/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:23.304 [331/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:23.304 [332/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:23.567 [333/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.567 [334/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:23.567 [335/710] Linking static target lib/librte_eventdev.a 00:01:23.567 [336/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:23.567 [337/710] Linking static target lib/librte_member.a 00:01:23.567 [338/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:23.828 [339/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:23.828 [340/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:23.828 [341/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:23.828 [342/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:23.828 [343/710] Linking static target lib/librte_cryptodev.a 00:01:23.828 [344/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:24.090 [345/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:24.090 [346/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:24.090 [347/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:24.090 [348/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:24.090 [349/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:24.090 [350/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:24.090 [351/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:24.090 [352/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:24.090 [353/710] Linking static target lib/librte_sched.a 00:01:24.090 [354/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:24.090 [355/710] Linking static target lib/librte_ethdev.a 00:01:24.090 [356/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:24.090 [357/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:24.090 [358/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:24.090 [359/710] Linking static target lib/librte_fib.a 00:01:24.349 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:24.349 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:24.349 [362/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:24.349 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:24.349 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:24.349 [365/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:24.612 [366/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:24.612 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:24.612 [368/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:24.612 [369/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:24.612 [370/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.875 [371/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.875 [372/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:24.875 [373/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:25.137 [374/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:25.137 [375/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:25.137 [376/710] Linking static target lib/librte_pdump.a 00:01:25.137 [377/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:25.137 [378/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:25.137 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:25.137 [380/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:25.396 [381/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:25.396 [382/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:25.396 [383/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:25.396 [384/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:25.396 [385/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:25.396 [386/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:25.396 [387/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:25.396 [388/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:25.396 [389/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.396 [390/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:25.659 [391/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:25.659 [392/710] Linking static target lib/librte_ipsec.a 00:01:25.659 [393/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:25.659 [394/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:25.659 [395/710] Linking static target lib/librte_table.a 00:01:25.659 [396/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:25.919 [397/710] Generating lib/cryptodev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:25.919 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:25.919 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:26.185 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:26.185 [401/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.185 [402/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:26.446 [403/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:26.446 [404/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:26.446 [405/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:26.712 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:26.712 [407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:26.712 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:26.712 [409/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:26.712 [410/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:26.712 [411/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:26.712 [412/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:26.972 [413/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.972 [414/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.972 [415/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.972 [416/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:26.972 [417/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:26.972 [418/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:26.972 [419/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:26.972 [420/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:26.972 [421/710] Linking target lib/librte_eal.so.24.0 00:01:27.234 [422/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:27.234 [423/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:27.234 [424/710] Linking static target drivers/librte_bus_vdev.a 00:01:27.235 [425/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:27.235 [426/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:27.235 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:27.235 [428/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:27.235 [429/710] Linking static target lib/librte_port.a 00:01:27.497 [430/710] Linking target lib/librte_ring.so.24.0 00:01:27.497 [431/710] Linking target lib/librte_meter.so.24.0 00:01:27.497 [432/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:27.497 [433/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:27.497 [434/710] Linking target lib/librte_pci.so.24.0 00:01:27.497 [435/710] Linking target lib/librte_timer.so.24.0 00:01:27.497 [436/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.758 [437/710] Generating symbol file 
lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:27.758 [438/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:27.758 [439/710] Linking target lib/librte_acl.so.24.0 00:01:27.758 [440/710] Linking target lib/librte_cfgfile.so.24.0 00:01:27.758 [441/710] Linking target lib/librte_rcu.so.24.0 00:01:27.758 [442/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:27.758 [443/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:27.758 [444/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:27.758 [445/710] Linking target lib/librte_mempool.so.24.0 00:01:27.758 [446/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:27.758 [447/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:27.758 [448/710] Linking target lib/librte_dmadev.so.24.0 00:01:27.758 [449/710] Linking target lib/librte_jobstats.so.24.0 00:01:27.758 [450/710] Linking static target lib/librte_graph.a 00:01:27.758 [451/710] Linking target lib/librte_stack.so.24.0 00:01:27.758 [452/710] Linking target lib/librte_rawdev.so.24.0 00:01:27.758 [453/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:27.758 [454/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:27.758 [455/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:28.021 [456/710] Linking static target drivers/librte_bus_pci.a 00:01:28.021 [457/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:28.021 [458/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:28.021 [459/710] Linking target drivers/librte_bus_vdev.so.24.0 00:01:28.021 [460/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:28.021 [461/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:28.021 [462/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:28.021 [463/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:28.021 [464/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:28.021 [465/710] Linking target lib/librte_rib.so.24.0 00:01:28.021 [466/710] Linking target lib/librte_mbuf.so.24.0 00:01:28.281 [467/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:28.281 [468/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:28.281 [469/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:28.281 [470/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:28.281 [471/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:28.281 [472/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:28.281 [473/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:28.281 [474/710] Linking target lib/librte_fib.so.24.0 00:01:28.281 [475/710] Linking target lib/librte_net.so.24.0 00:01:28.545 [476/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:28.545 [477/710] Linking target lib/librte_bbdev.so.24.0 00:01:28.545 [478/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:28.545 [479/710] Linking target lib/librte_compressdev.so.24.0 00:01:28.545 [480/710] Linking 
target lib/librte_cryptodev.so.24.0 00:01:28.545 [481/710] Linking target lib/librte_distributor.so.24.0 00:01:28.545 [482/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:28.545 [483/710] Linking target lib/librte_gpudev.so.24.0 00:01:28.545 [484/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:28.545 [485/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.545 [486/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:28.545 [487/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:28.545 [488/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:28.545 [489/710] Linking target lib/librte_regexdev.so.24.0 00:01:28.546 [490/710] Linking static target drivers/librte_mempool_ring.a 00:01:28.546 [491/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:28.546 [492/710] Linking target lib/librte_mldev.so.24.0 00:01:28.546 [493/710] Linking target lib/librte_reorder.so.24.0 00:01:28.546 [494/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:28.546 [495/710] Linking target lib/librte_sched.so.24.0 00:01:28.812 [496/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.812 [497/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:28.812 [498/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:28.812 [499/710] Linking target drivers/librte_mempool_ring.so.24.0 00:01:28.812 [500/710] Linking target lib/librte_hash.so.24.0 00:01:28.812 [501/710] Linking target lib/librte_cmdline.so.24.0 00:01:28.812 [502/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:28.812 [503/710] Linking target drivers/librte_bus_pci.so.24.0 00:01:28.812 [504/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:28.812 [505/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:28.812 [506/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.812 [507/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:28.812 [508/710] Linking target lib/librte_security.so.24.0 00:01:28.812 [509/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:28.812 [510/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:28.812 [511/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:29.088 [512/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:29.088 [513/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:29.088 [514/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:29.088 [515/710] Linking target lib/librte_lpm.so.24.0 00:01:29.088 [516/710] Linking target lib/librte_efd.so.24.0 00:01:29.088 [517/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:29.088 [518/710] Linking target lib/librte_member.so.24.0 00:01:29.088 [519/710] Linking target lib/librte_ipsec.so.24.0 00:01:29.088 [520/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:29.088 [521/710] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:29.349 [522/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:29.349 [523/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:29.349 [524/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:29.349 [525/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:29.612 [526/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:29.612 [527/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:29.612 [528/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:29.612 [529/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:29.874 [530/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:29.874 [531/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:29.874 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:30.137 [533/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:30.137 [534/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:30.137 [535/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:30.137 [536/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:30.137 [537/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:30.400 [538/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:30.400 [539/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:30.400 [540/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:30.400 [541/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:30.661 [542/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:30.661 [543/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:30.661 [544/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:30.661 [545/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:30.661 [546/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:30.930 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:30.930 [548/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:30.930 [549/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:30.930 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:30.930 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:30.930 [552/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:30.930 [553/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:30.930 [554/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:30.930 [555/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:31.190 [556/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:31.190 [557/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:31.190 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:31.449 [559/710] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:31.716 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:31.975 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:31.975 [562/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:31.975 [563/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:31.975 [564/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:32.237 [565/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:32.237 [566/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:32.237 [567/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:32.237 [568/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:32.237 [569/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:32.506 [570/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:32.506 [571/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:32.506 [572/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.506 [573/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:32.506 [574/710] Linking target lib/librte_ethdev.so.24.0 00:01:32.765 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:32.765 [576/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:32.765 [577/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:32.765 [578/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:32.765 [579/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:33.027 [580/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:33.027 [581/710] Linking target lib/librte_metrics.so.24.0 00:01:33.027 [582/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:33.027 [583/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:33.027 [584/710] Linking target lib/librte_bpf.so.24.0 00:01:33.027 [585/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:33.027 [586/710] Linking target lib/librte_gro.so.24.0 00:01:33.027 [587/710] Linking target lib/librte_eventdev.so.24.0 00:01:33.027 [588/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:33.027 [589/710] Linking target lib/librte_gso.so.24.0 00:01:33.027 [590/710] Linking target lib/librte_ip_frag.so.24.0 00:01:33.292 [591/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:33.292 [592/710] Linking target lib/librte_pcapng.so.24.0 00:01:33.292 [593/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:33.292 [594/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:33.292 [595/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:33.292 [596/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:33.292 [597/710] Linking static target lib/librte_pdcp.a 00:01:33.292 [598/710] Linking target lib/librte_bitratestats.so.24.0 00:01:33.292 [599/710] Linking target 
lib/librte_latencystats.so.24.0 00:01:33.292 [600/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:33.292 [601/710] Linking target lib/librte_power.so.24.0 00:01:33.292 [602/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:33.292 [603/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:33.292 [604/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:33.292 [605/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:33.557 [606/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:33.557 [607/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:33.557 [608/710] Linking target lib/librte_dispatcher.so.24.0 00:01:33.557 [609/710] Linking target lib/librte_pdump.so.24.0 00:01:33.557 [610/710] Linking target lib/librte_port.so.24.0 00:01:33.557 [611/710] Linking target lib/librte_graph.so.24.0 00:01:33.557 [612/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:33.557 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:33.557 [614/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:33.818 [615/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.818 [616/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:33.818 [617/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:33.818 [618/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:33.818 [619/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:33.818 [620/710] Linking target lib/librte_pdcp.so.24.0 00:01:33.818 [621/710] Linking target lib/librte_table.so.24.0 00:01:33.818 [622/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:33.818 [623/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:33.818 [624/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:33.818 [625/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:34.078 [626/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:34.078 [627/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:34.078 [628/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:34.078 [629/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:34.078 [630/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:34.724 [631/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:34.724 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:34.724 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:34.724 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:34.724 [635/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:34.724 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:34.724 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:34.983 [638/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:34.983 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:34.983 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:34.983 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:34.983 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:35.242 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:35.242 [644/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:35.242 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:35.242 [646/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:35.501 [647/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:35.501 [648/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:35.501 [649/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:35.501 [650/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:35.760 [651/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:35.760 [652/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:35.760 [653/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:36.019 [654/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:36.019 [655/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:36.019 [656/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:36.019 [657/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:36.019 [658/710] Linking static target drivers/librte_net_i40e.a 00:01:36.019 [659/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:36.019 [660/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:36.276 [661/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:36.535 [662/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:36.535 [663/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:36.535 [664/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:36.535 [665/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:36.535 [666/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.793 [667/710] Linking target drivers/librte_net_i40e.so.24.0 00:01:36.793 [668/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:36.793 [669/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:37.050 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:37.307 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:37.564 [672/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:37.564 [673/710] Linking static target lib/librte_node.a 00:01:37.564 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:37.821 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.821 [676/710] Linking target lib/librte_node.so.24.0 00:01:39.193 [677/710] Compiling C 
object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:39.193 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:39.193 [679/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:41.090 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:41.090 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:47.639 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:19.699 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:19.699 [684/710] Linking static target lib/librte_vhost.a 00:02:19.699 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.699 [686/710] Linking target lib/librte_vhost.so.24.0 00:02:37.786 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:37.786 [688/710] Linking static target lib/librte_pipeline.a 00:02:38.045 [689/710] Linking target app/dpdk-dumpcap 00:02:38.045 [690/710] Linking target app/dpdk-test-acl 00:02:38.045 [691/710] Linking target app/dpdk-proc-info 00:02:38.045 [692/710] Linking target app/dpdk-test-cmdline 00:02:38.045 [693/710] Linking target app/dpdk-test-regex 00:02:38.045 [694/710] Linking target app/dpdk-test-dma-perf 00:02:38.045 [695/710] Linking target app/dpdk-test-security-perf 00:02:38.045 [696/710] Linking target app/dpdk-test-mldev 00:02:38.045 [697/710] Linking target app/dpdk-test-compress-perf 00:02:38.045 [698/710] Linking target app/dpdk-graph 00:02:38.045 [699/710] Linking target app/dpdk-test-gpudev 00:02:38.045 [700/710] Linking target app/dpdk-test-sad 00:02:38.045 [701/710] Linking target app/dpdk-test-fib 00:02:38.045 [702/710] Linking target app/dpdk-test-flow-perf 00:02:38.045 [703/710] Linking target app/dpdk-pdump 00:02:38.045 [704/710] Linking target app/dpdk-test-pipeline 00:02:38.045 [705/710] Linking target app/dpdk-test-bbdev 00:02:38.045 [706/710] Linking target app/dpdk-test-eventdev 00:02:38.045 [707/710] Linking target app/dpdk-test-crypto-perf 00:02:38.045 [708/710] Linking target app/dpdk-testpmd 00:02:39.946 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.946 [710/710] Linking target lib/librte_pipeline.so.24.0 00:02:39.946 05:16:47 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:40.204 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:40.204 [0/1] Installing files. 
00:02:40.466 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:40.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.467 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:40.469 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:40.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.471 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:40.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:40.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:40.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:40.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:40.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:40.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:40.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:40.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:40.472 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:40.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:40.472 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.472 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:41.043 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:41.043 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:41.043 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.043 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:41.043 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.043 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.043 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.043 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.043 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.043 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.043 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.043 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.043 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.043 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.043 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.043 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.043 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.043 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.043 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.043 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.043 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.044 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.044 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.044 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.044 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:41.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:41.309 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:41.309 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:41.309 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:41.309 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:41.309 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:41.309 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:41.309 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:41.309 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:41.309 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:41.309 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:41.309 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:41.309 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:41.309 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:41.309 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:41.309 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:41.309 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:41.309 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:41.309 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:41.309 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:41.309 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:41.309 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:41.309 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:41.309 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:41.309 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:41.309 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:41.309 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:41.309 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:41.309 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:41.310 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:41.310 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:41.310 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:41.310 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:41.310 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:41.310 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:41.310 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:41.310 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:41.310 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:41.310 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:41.310 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:41.310 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:41.310 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:41.310 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:41.310 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:41.310 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:41.310 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:41.310 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:41.310 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:41.310 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:41.310 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:41.310 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:41.310 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:41.310 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:41.310 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:41.310 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:41.310 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:41.310 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:41.310 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:41.310 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:41.310 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:41.310 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:41.310 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:41.310 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:41.310 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:41.310 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:41.310 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:41.310 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:41.310 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:41.310 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:41.310 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:41.310 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:41.310 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:41.310 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:41.310 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:41.310 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:41.310 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:41.310 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:41.310 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:41.310 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:41.310 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:41.310 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:41.310 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:41.310 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:41.310 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:41.310 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:41.310 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:41.310 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:41.310 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:41.310 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:41.310 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:41.310 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:41.310 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:41.310 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:41.310 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:41.310 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:41.310 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:41.310 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:41.310 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:41.310 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:41.310 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:41.310 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:41.310 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:41.310 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:41.310 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:41.310 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:41.310 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:41.310 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:41.310 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:41.310 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:41.310 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:41.310 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:41.310 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:41.310 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:41.310 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:41.310 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:41.310 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:41.310 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:41.310 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:41.310 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:41.310 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:41.310 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:41.310 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:41.310 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:41.310 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:41.310 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:41.310 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:41.310 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:41.310 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:41.310 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:41.310 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:41.310 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:41.311 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:41.311 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:41.311 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:41.311 05:16:48 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:02:41.311 05:16:48 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:41.311 05:16:48 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:02:41.311 05:16:48 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:41.311 00:02:41.311 real 1m33.706s 00:02:41.311 user 18m6.151s 00:02:41.311 sys 2m6.981s 00:02:41.311 05:16:48 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:41.311 05:16:48 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:41.311 ************************************ 00:02:41.311 END TEST build_native_dpdk 00:02:41.311 ************************************ 00:02:41.311 05:16:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:41.311 05:16:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:41.311 05:16:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:41.311 05:16:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:41.311 05:16:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:41.311 05:16:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:41.311 05:16:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:41.311 05:16:48 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:41.311 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:41.311 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.311 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.573 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:41.833 Using 'verbs' RDMA provider 00:02:52.380 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:00.500 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:00.500 Creating mk/config.mk...done. 00:03:00.500 Creating mk/cc.flags.mk...done. 00:03:00.500 Type 'make' to build. 00:03:00.500 05:17:07 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:00.500 05:17:07 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:00.500 05:17:07 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:00.500 05:17:07 -- common/autotest_common.sh@10 -- $ set +x 00:03:00.500 ************************************ 00:03:00.500 START TEST make 00:03:00.500 ************************************ 00:03:00.500 05:17:07 make -- common/autotest_common.sh@1121 -- $ make -j48 00:03:00.758 make[1]: Nothing to be done for 'all'. 
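The configure step above resolves the freshly installed DPDK through the pkg-config metadata written earlier (libdpdk.pc and libdpdk-libs.pc under dpdk/build/lib/pkgconfig). As a minimal hand-run sketch of what that lookup amounts to — these pkg-config and ls invocations are illustrative checks, not commands taken from this job — one could point PKG_CONFIG_PATH at the same directory and query it:

  # hand-run check (not part of the job); paths are taken from the install log above
  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk      # version of the DPDK build that configure found
  pkg-config --cflags --libs libdpdk   # compile/link flags SPDK picks up as "additional libs"
  # versioned symlink chain created by the install step: .so -> .so.24 -> .so.24.0
  ls -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so*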
00:03:02.144 The Meson build system 00:03:02.144 Version: 1.3.1 00:03:02.144 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:02.144 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:02.144 Build type: native build 00:03:02.144 Project name: libvfio-user 00:03:02.144 Project version: 0.0.1 00:03:02.144 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:02.144 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:02.144 Host machine cpu family: x86_64 00:03:02.144 Host machine cpu: x86_64 00:03:02.144 Run-time dependency threads found: YES 00:03:02.144 Library dl found: YES 00:03:02.144 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:02.144 Run-time dependency json-c found: YES 0.17 00:03:02.144 Run-time dependency cmocka found: YES 1.1.7 00:03:02.144 Program pytest-3 found: NO 00:03:02.144 Program flake8 found: NO 00:03:02.144 Program misspell-fixer found: NO 00:03:02.144 Program restructuredtext-lint found: NO 00:03:02.144 Program valgrind found: YES (/usr/bin/valgrind) 00:03:02.144 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:02.144 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:02.144 Compiler for C supports arguments -Wwrite-strings: YES 00:03:02.144 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:02.144 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:02.144 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:02.144 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:02.144 Build targets in project: 8 00:03:02.144 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:02.144 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:02.144 00:03:02.144 libvfio-user 0.0.1 00:03:02.144 00:03:02.144 User defined options 00:03:02.144 buildtype : debug 00:03:02.144 default_library: shared 00:03:02.144 libdir : /usr/local/lib 00:03:02.144 00:03:02.144 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:03.090 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:03.090 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:03.353 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:03.353 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:03.353 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:03.353 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:03.353 [6/37] Compiling C object samples/null.p/null.c.o 00:03:03.353 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:03.353 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:03.353 [9/37] Compiling C object samples/server.p/server.c.o 00:03:03.353 [10/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:03.353 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:03.353 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:03.353 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:03.353 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:03.353 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:03.353 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:03.353 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:03.353 [18/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:03.353 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:03.353 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:03.353 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:03.353 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:03.353 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:03.353 [24/37] Compiling C object samples/client.p/client.c.o 00:03:03.353 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:03.614 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:03.614 [27/37] Linking target samples/client 00:03:03.614 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:03.614 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:03.614 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:03.614 [31/37] Linking target test/unit_tests 00:03:03.875 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:03.875 [33/37] Linking target samples/null 00:03:03.875 [34/37] Linking target samples/server 00:03:03.875 [35/37] Linking target samples/gpio-pci-idio-16 00:03:03.875 [36/37] Linking target samples/lspci 00:03:03.875 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:03.875 INFO: autodetecting backend as ninja 00:03:03.875 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
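The 37 ninja steps above build libvfio-user with the options listed in the Meson summary (buildtype debug, default_library shared, libdir /usr/local/lib). A rough approximation of that configure-and-build sequence, assuming the source and build directories shown in the log rather than the exact command SPDK's scripts run, would be:

  # approximation assembled from the "User defined options" block above, not the literal invocation
  meson setup /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug \
              /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user \
              --buildtype=debug --default-library=shared --libdir=/usr/local/lib
  ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug

The staged install that follows in the log then re-runs the same build directory through meson install with DESTDIR pointing into spdk/build/libvfio-user, so the shared objects land in SPDK's own build tree.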
00:03:04.136 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:04.706 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:04.706 ninja: no work to do. 00:03:16.904 CC lib/ut_mock/mock.o 00:03:16.904 CC lib/log/log.o 00:03:16.904 CC lib/ut/ut.o 00:03:16.904 CC lib/log/log_flags.o 00:03:16.904 CC lib/log/log_deprecated.o 00:03:16.904 LIB libspdk_log.a 00:03:16.904 LIB libspdk_ut.a 00:03:16.904 LIB libspdk_ut_mock.a 00:03:17.162 SO libspdk_ut.so.2.0 00:03:17.162 SO libspdk_log.so.7.0 00:03:17.162 SO libspdk_ut_mock.so.6.0 00:03:17.162 SYMLINK libspdk_ut.so 00:03:17.162 SYMLINK libspdk_ut_mock.so 00:03:17.162 SYMLINK libspdk_log.so 00:03:17.162 CC lib/ioat/ioat.o 00:03:17.162 CC lib/dma/dma.o 00:03:17.162 CC lib/util/base64.o 00:03:17.162 CXX lib/trace_parser/trace.o 00:03:17.162 CC lib/util/bit_array.o 00:03:17.162 CC lib/util/cpuset.o 00:03:17.162 CC lib/util/crc16.o 00:03:17.162 CC lib/util/crc32.o 00:03:17.162 CC lib/util/crc32c.o 00:03:17.162 CC lib/util/crc32_ieee.o 00:03:17.162 CC lib/util/crc64.o 00:03:17.162 CC lib/util/dif.o 00:03:17.162 CC lib/util/fd.o 00:03:17.162 CC lib/util/file.o 00:03:17.162 CC lib/util/hexlify.o 00:03:17.162 CC lib/util/iov.o 00:03:17.162 CC lib/util/math.o 00:03:17.162 CC lib/util/pipe.o 00:03:17.162 CC lib/util/strerror_tls.o 00:03:17.162 CC lib/util/string.o 00:03:17.162 CC lib/util/uuid.o 00:03:17.162 CC lib/util/fd_group.o 00:03:17.162 CC lib/util/xor.o 00:03:17.162 CC lib/util/zipf.o 00:03:17.421 CC lib/vfio_user/host/vfio_user_pci.o 00:03:17.421 CC lib/vfio_user/host/vfio_user.o 00:03:17.421 LIB libspdk_dma.a 00:03:17.421 SO libspdk_dma.so.4.0 00:03:17.679 SYMLINK libspdk_dma.so 00:03:17.679 LIB libspdk_ioat.a 00:03:17.679 SO libspdk_ioat.so.7.0 00:03:17.679 SYMLINK libspdk_ioat.so 00:03:17.679 LIB libspdk_vfio_user.a 00:03:17.679 SO libspdk_vfio_user.so.5.0 00:03:17.937 SYMLINK libspdk_vfio_user.so 00:03:17.937 LIB libspdk_util.a 00:03:17.937 SO libspdk_util.so.9.0 00:03:17.937 SYMLINK libspdk_util.so 00:03:18.195 CC lib/conf/conf.o 00:03:18.195 CC lib/idxd/idxd.o 00:03:18.195 CC lib/json/json_parse.o 00:03:18.195 CC lib/idxd/idxd_user.o 00:03:18.195 CC lib/json/json_util.o 00:03:18.196 CC lib/rdma/common.o 00:03:18.196 CC lib/idxd/idxd_kernel.o 00:03:18.196 CC lib/vmd/vmd.o 00:03:18.196 CC lib/json/json_write.o 00:03:18.196 CC lib/env_dpdk/env.o 00:03:18.196 CC lib/rdma/rdma_verbs.o 00:03:18.196 CC lib/vmd/led.o 00:03:18.196 CC lib/env_dpdk/memory.o 00:03:18.196 CC lib/env_dpdk/pci.o 00:03:18.196 CC lib/env_dpdk/init.o 00:03:18.196 CC lib/env_dpdk/threads.o 00:03:18.196 CC lib/env_dpdk/pci_ioat.o 00:03:18.196 CC lib/env_dpdk/pci_virtio.o 00:03:18.196 CC lib/env_dpdk/pci_vmd.o 00:03:18.196 CC lib/env_dpdk/pci_idxd.o 00:03:18.196 CC lib/env_dpdk/pci_event.o 00:03:18.196 CC lib/env_dpdk/sigbus_handler.o 00:03:18.196 CC lib/env_dpdk/pci_dpdk.o 00:03:18.196 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:18.196 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:18.196 LIB libspdk_trace_parser.a 00:03:18.196 SO libspdk_trace_parser.so.5.0 00:03:18.454 SYMLINK libspdk_trace_parser.so 00:03:18.454 LIB libspdk_conf.a 00:03:18.454 SO libspdk_conf.so.6.0 00:03:18.454 LIB libspdk_json.a 00:03:18.454 SYMLINK libspdk_conf.so 00:03:18.454 LIB libspdk_rdma.a 00:03:18.454 SO libspdk_json.so.6.0 00:03:18.454 SO libspdk_rdma.so.6.0 00:03:18.712 SYMLINK libspdk_rdma.so 00:03:18.712 SYMLINK 
libspdk_json.so 00:03:18.712 CC lib/jsonrpc/jsonrpc_server.o 00:03:18.712 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:18.712 CC lib/jsonrpc/jsonrpc_client.o 00:03:18.712 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:18.712 LIB libspdk_idxd.a 00:03:18.712 SO libspdk_idxd.so.12.0 00:03:18.970 SYMLINK libspdk_idxd.so 00:03:18.970 LIB libspdk_vmd.a 00:03:18.970 SO libspdk_vmd.so.6.0 00:03:18.970 LIB libspdk_jsonrpc.a 00:03:18.970 SYMLINK libspdk_vmd.so 00:03:18.970 SO libspdk_jsonrpc.so.6.0 00:03:19.227 SYMLINK libspdk_jsonrpc.so 00:03:19.227 CC lib/rpc/rpc.o 00:03:19.486 LIB libspdk_rpc.a 00:03:19.486 SO libspdk_rpc.so.6.0 00:03:19.486 SYMLINK libspdk_rpc.so 00:03:19.744 CC lib/trace/trace.o 00:03:19.744 CC lib/trace/trace_flags.o 00:03:19.744 CC lib/trace/trace_rpc.o 00:03:19.744 CC lib/keyring/keyring.o 00:03:19.744 CC lib/keyring/keyring_rpc.o 00:03:19.744 CC lib/notify/notify.o 00:03:19.744 CC lib/notify/notify_rpc.o 00:03:20.002 LIB libspdk_notify.a 00:03:20.002 SO libspdk_notify.so.6.0 00:03:20.002 LIB libspdk_keyring.a 00:03:20.002 SYMLINK libspdk_notify.so 00:03:20.002 LIB libspdk_trace.a 00:03:20.002 SO libspdk_keyring.so.1.0 00:03:20.002 SO libspdk_trace.so.10.0 00:03:20.002 SYMLINK libspdk_keyring.so 00:03:20.002 SYMLINK libspdk_trace.so 00:03:20.260 LIB libspdk_env_dpdk.a 00:03:20.260 CC lib/sock/sock.o 00:03:20.260 CC lib/sock/sock_rpc.o 00:03:20.260 CC lib/thread/thread.o 00:03:20.260 CC lib/thread/iobuf.o 00:03:20.260 SO libspdk_env_dpdk.so.14.0 00:03:20.518 SYMLINK libspdk_env_dpdk.so 00:03:20.776 LIB libspdk_sock.a 00:03:20.776 SO libspdk_sock.so.9.0 00:03:20.776 SYMLINK libspdk_sock.so 00:03:21.036 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:21.036 CC lib/nvme/nvme_ctrlr.o 00:03:21.036 CC lib/nvme/nvme_fabric.o 00:03:21.036 CC lib/nvme/nvme_ns_cmd.o 00:03:21.036 CC lib/nvme/nvme_ns.o 00:03:21.036 CC lib/nvme/nvme_pcie_common.o 00:03:21.036 CC lib/nvme/nvme_pcie.o 00:03:21.036 CC lib/nvme/nvme_qpair.o 00:03:21.036 CC lib/nvme/nvme.o 00:03:21.036 CC lib/nvme/nvme_quirks.o 00:03:21.036 CC lib/nvme/nvme_transport.o 00:03:21.036 CC lib/nvme/nvme_discovery.o 00:03:21.036 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:21.036 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:21.036 CC lib/nvme/nvme_tcp.o 00:03:21.036 CC lib/nvme/nvme_opal.o 00:03:21.036 CC lib/nvme/nvme_io_msg.o 00:03:21.036 CC lib/nvme/nvme_poll_group.o 00:03:21.036 CC lib/nvme/nvme_zns.o 00:03:21.036 CC lib/nvme/nvme_stubs.o 00:03:21.036 CC lib/nvme/nvme_auth.o 00:03:21.036 CC lib/nvme/nvme_cuse.o 00:03:21.036 CC lib/nvme/nvme_vfio_user.o 00:03:21.036 CC lib/nvme/nvme_rdma.o 00:03:22.001 LIB libspdk_thread.a 00:03:22.001 SO libspdk_thread.so.10.0 00:03:22.001 SYMLINK libspdk_thread.so 00:03:22.001 CC lib/accel/accel.o 00:03:22.001 CC lib/blob/blobstore.o 00:03:22.001 CC lib/accel/accel_rpc.o 00:03:22.001 CC lib/virtio/virtio.o 00:03:22.001 CC lib/accel/accel_sw.o 00:03:22.001 CC lib/blob/request.o 00:03:22.001 CC lib/virtio/virtio_vhost_user.o 00:03:22.001 CC lib/blob/zeroes.o 00:03:22.001 CC lib/virtio/virtio_vfio_user.o 00:03:22.001 CC lib/blob/blob_bs_dev.o 00:03:22.001 CC lib/virtio/virtio_pci.o 00:03:22.001 CC lib/vfu_tgt/tgt_endpoint.o 00:03:22.001 CC lib/init/json_config.o 00:03:22.001 CC lib/vfu_tgt/tgt_rpc.o 00:03:22.001 CC lib/init/subsystem.o 00:03:22.001 CC lib/init/subsystem_rpc.o 00:03:22.001 CC lib/init/rpc.o 00:03:22.260 LIB libspdk_init.a 00:03:22.260 SO libspdk_init.so.5.0 00:03:22.519 LIB libspdk_virtio.a 00:03:22.519 LIB libspdk_vfu_tgt.a 00:03:22.519 SYMLINK libspdk_init.so 00:03:22.519 SO libspdk_virtio.so.7.0 00:03:22.519 
SO libspdk_vfu_tgt.so.3.0 00:03:22.519 SYMLINK libspdk_vfu_tgt.so 00:03:22.519 SYMLINK libspdk_virtio.so 00:03:22.519 CC lib/event/app.o 00:03:22.519 CC lib/event/reactor.o 00:03:22.519 CC lib/event/log_rpc.o 00:03:22.519 CC lib/event/app_rpc.o 00:03:22.519 CC lib/event/scheduler_static.o 00:03:23.085 LIB libspdk_event.a 00:03:23.085 SO libspdk_event.so.13.0 00:03:23.085 SYMLINK libspdk_event.so 00:03:23.085 LIB libspdk_accel.a 00:03:23.085 SO libspdk_accel.so.15.0 00:03:23.343 LIB libspdk_nvme.a 00:03:23.343 SYMLINK libspdk_accel.so 00:03:23.343 SO libspdk_nvme.so.13.0 00:03:23.343 CC lib/bdev/bdev.o 00:03:23.343 CC lib/bdev/bdev_rpc.o 00:03:23.343 CC lib/bdev/bdev_zone.o 00:03:23.343 CC lib/bdev/part.o 00:03:23.343 CC lib/bdev/scsi_nvme.o 00:03:23.601 SYMLINK libspdk_nvme.so 00:03:24.978 LIB libspdk_blob.a 00:03:24.978 SO libspdk_blob.so.11.0 00:03:24.978 SYMLINK libspdk_blob.so 00:03:25.237 CC lib/blobfs/blobfs.o 00:03:25.237 CC lib/blobfs/tree.o 00:03:25.237 CC lib/lvol/lvol.o 00:03:25.804 LIB libspdk_bdev.a 00:03:26.062 SO libspdk_bdev.so.15.0 00:03:26.062 SYMLINK libspdk_bdev.so 00:03:26.062 LIB libspdk_blobfs.a 00:03:26.062 SO libspdk_blobfs.so.10.0 00:03:26.062 SYMLINK libspdk_blobfs.so 00:03:26.062 LIB libspdk_lvol.a 00:03:26.326 SO libspdk_lvol.so.10.0 00:03:26.326 CC lib/ublk/ublk.o 00:03:26.326 CC lib/scsi/dev.o 00:03:26.326 CC lib/nbd/nbd.o 00:03:26.326 CC lib/nvmf/ctrlr.o 00:03:26.326 CC lib/nbd/nbd_rpc.o 00:03:26.326 CC lib/scsi/lun.o 00:03:26.326 CC lib/ublk/ublk_rpc.o 00:03:26.326 CC lib/ftl/ftl_core.o 00:03:26.326 CC lib/nvmf/ctrlr_discovery.o 00:03:26.326 CC lib/scsi/port.o 00:03:26.326 CC lib/ftl/ftl_init.o 00:03:26.326 CC lib/scsi/scsi.o 00:03:26.326 CC lib/nvmf/ctrlr_bdev.o 00:03:26.326 CC lib/scsi/scsi_bdev.o 00:03:26.326 CC lib/ftl/ftl_layout.o 00:03:26.326 CC lib/ftl/ftl_debug.o 00:03:26.326 CC lib/scsi/scsi_pr.o 00:03:26.326 CC lib/nvmf/subsystem.o 00:03:26.326 CC lib/nvmf/nvmf.o 00:03:26.326 CC lib/ftl/ftl_io.o 00:03:26.326 CC lib/scsi/scsi_rpc.o 00:03:26.326 CC lib/nvmf/nvmf_rpc.o 00:03:26.326 CC lib/ftl/ftl_sb.o 00:03:26.326 CC lib/scsi/task.o 00:03:26.326 CC lib/nvmf/transport.o 00:03:26.326 CC lib/ftl/ftl_l2p.o 00:03:26.326 CC lib/nvmf/tcp.o 00:03:26.326 CC lib/ftl/ftl_l2p_flat.o 00:03:26.326 CC lib/nvmf/stubs.o 00:03:26.326 CC lib/ftl/ftl_nv_cache.o 00:03:26.326 CC lib/ftl/ftl_band.o 00:03:26.326 CC lib/nvmf/mdns_server.o 00:03:26.326 CC lib/nvmf/vfio_user.o 00:03:26.326 CC lib/nvmf/rdma.o 00:03:26.326 CC lib/ftl/ftl_band_ops.o 00:03:26.326 CC lib/ftl/ftl_writer.o 00:03:26.326 CC lib/nvmf/auth.o 00:03:26.326 CC lib/ftl/ftl_rq.o 00:03:26.326 CC lib/ftl/ftl_reloc.o 00:03:26.326 CC lib/ftl/ftl_l2p_cache.o 00:03:26.326 CC lib/ftl/ftl_p2l.o 00:03:26.326 CC lib/ftl/mngt/ftl_mngt.o 00:03:26.326 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:26.326 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:26.326 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:26.326 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:26.326 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:26.326 SYMLINK libspdk_lvol.so 00:03:26.326 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:26.585 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:26.585 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:26.585 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:26.585 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:26.585 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:26.585 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:26.585 CC lib/ftl/utils/ftl_conf.o 00:03:26.585 CC lib/ftl/utils/ftl_md.o 00:03:26.585 CC lib/ftl/utils/ftl_mempool.o 00:03:26.585 CC lib/ftl/utils/ftl_bitmap.o 00:03:26.585 CC 
lib/ftl/utils/ftl_property.o 00:03:26.585 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:26.847 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:26.847 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:26.847 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:26.847 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:26.847 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:26.847 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:26.847 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:26.847 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:26.847 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:26.847 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:26.847 CC lib/ftl/base/ftl_base_dev.o 00:03:26.847 CC lib/ftl/base/ftl_base_bdev.o 00:03:26.847 CC lib/ftl/ftl_trace.o 00:03:27.107 LIB libspdk_nbd.a 00:03:27.107 SO libspdk_nbd.so.7.0 00:03:27.107 LIB libspdk_scsi.a 00:03:27.107 SYMLINK libspdk_nbd.so 00:03:27.107 SO libspdk_scsi.so.9.0 00:03:27.366 SYMLINK libspdk_scsi.so 00:03:27.366 LIB libspdk_ublk.a 00:03:27.366 SO libspdk_ublk.so.3.0 00:03:27.366 SYMLINK libspdk_ublk.so 00:03:27.366 CC lib/iscsi/conn.o 00:03:27.366 CC lib/vhost/vhost.o 00:03:27.366 CC lib/vhost/vhost_rpc.o 00:03:27.366 CC lib/iscsi/init_grp.o 00:03:27.366 CC lib/iscsi/iscsi.o 00:03:27.366 CC lib/vhost/vhost_scsi.o 00:03:27.366 CC lib/iscsi/md5.o 00:03:27.366 CC lib/vhost/vhost_blk.o 00:03:27.366 CC lib/iscsi/param.o 00:03:27.366 CC lib/vhost/rte_vhost_user.o 00:03:27.366 CC lib/iscsi/portal_grp.o 00:03:27.366 CC lib/iscsi/tgt_node.o 00:03:27.366 CC lib/iscsi/iscsi_subsystem.o 00:03:27.366 CC lib/iscsi/iscsi_rpc.o 00:03:27.366 CC lib/iscsi/task.o 00:03:27.624 LIB libspdk_ftl.a 00:03:27.882 SO libspdk_ftl.so.9.0 00:03:28.139 SYMLINK libspdk_ftl.so 00:03:28.704 LIB libspdk_vhost.a 00:03:28.704 SO libspdk_vhost.so.8.0 00:03:28.704 SYMLINK libspdk_vhost.so 00:03:28.704 LIB libspdk_nvmf.a 00:03:28.974 LIB libspdk_iscsi.a 00:03:28.974 SO libspdk_nvmf.so.18.0 00:03:28.974 SO libspdk_iscsi.so.8.0 00:03:28.974 SYMLINK libspdk_iscsi.so 00:03:29.236 SYMLINK libspdk_nvmf.so 00:03:29.493 CC module/vfu_device/vfu_virtio.o 00:03:29.493 CC module/env_dpdk/env_dpdk_rpc.o 00:03:29.493 CC module/vfu_device/vfu_virtio_blk.o 00:03:29.493 CC module/vfu_device/vfu_virtio_scsi.o 00:03:29.493 CC module/vfu_device/vfu_virtio_rpc.o 00:03:29.493 CC module/keyring/linux/keyring.o 00:03:29.493 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:29.493 CC module/blob/bdev/blob_bdev.o 00:03:29.493 CC module/keyring/linux/keyring_rpc.o 00:03:29.493 CC module/accel/iaa/accel_iaa.o 00:03:29.493 CC module/accel/iaa/accel_iaa_rpc.o 00:03:29.493 CC module/sock/posix/posix.o 00:03:29.493 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:29.493 CC module/accel/error/accel_error.o 00:03:29.493 CC module/accel/dsa/accel_dsa.o 00:03:29.493 CC module/accel/ioat/accel_ioat.o 00:03:29.493 CC module/scheduler/gscheduler/gscheduler.o 00:03:29.493 CC module/keyring/file/keyring.o 00:03:29.493 CC module/accel/dsa/accel_dsa_rpc.o 00:03:29.493 CC module/accel/error/accel_error_rpc.o 00:03:29.493 CC module/keyring/file/keyring_rpc.o 00:03:29.493 CC module/accel/ioat/accel_ioat_rpc.o 00:03:29.493 LIB libspdk_env_dpdk_rpc.a 00:03:29.493 SO libspdk_env_dpdk_rpc.so.6.0 00:03:29.493 SYMLINK libspdk_env_dpdk_rpc.so 00:03:29.804 LIB libspdk_keyring_linux.a 00:03:29.804 LIB libspdk_keyring_file.a 00:03:29.804 LIB libspdk_scheduler_gscheduler.a 00:03:29.804 LIB libspdk_scheduler_dpdk_governor.a 00:03:29.804 SO libspdk_keyring_file.so.1.0 00:03:29.804 SO libspdk_keyring_linux.so.1.0 00:03:29.804 SO libspdk_scheduler_gscheduler.so.4.0 00:03:29.804 SO 
libspdk_scheduler_dpdk_governor.so.4.0 00:03:29.804 LIB libspdk_scheduler_dynamic.a 00:03:29.804 LIB libspdk_accel_error.a 00:03:29.804 LIB libspdk_accel_ioat.a 00:03:29.804 SO libspdk_scheduler_dynamic.so.4.0 00:03:29.804 LIB libspdk_accel_iaa.a 00:03:29.804 SO libspdk_accel_error.so.2.0 00:03:29.804 SYMLINK libspdk_keyring_linux.so 00:03:29.804 SYMLINK libspdk_keyring_file.so 00:03:29.804 SO libspdk_accel_ioat.so.6.0 00:03:29.804 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:29.804 SYMLINK libspdk_scheduler_gscheduler.so 00:03:29.804 SO libspdk_accel_iaa.so.3.0 00:03:29.804 SYMLINK libspdk_scheduler_dynamic.so 00:03:29.804 LIB libspdk_accel_dsa.a 00:03:29.804 LIB libspdk_blob_bdev.a 00:03:29.804 SYMLINK libspdk_accel_error.so 00:03:29.804 SYMLINK libspdk_accel_ioat.so 00:03:29.804 SO libspdk_accel_dsa.so.5.0 00:03:29.804 SO libspdk_blob_bdev.so.11.0 00:03:29.804 SYMLINK libspdk_accel_iaa.so 00:03:29.804 SYMLINK libspdk_blob_bdev.so 00:03:29.804 SYMLINK libspdk_accel_dsa.so 00:03:30.061 LIB libspdk_vfu_device.a 00:03:30.061 SO libspdk_vfu_device.so.3.0 00:03:30.061 CC module/bdev/null/bdev_null.o 00:03:30.061 CC module/blobfs/bdev/blobfs_bdev.o 00:03:30.061 CC module/bdev/delay/vbdev_delay.o 00:03:30.061 CC module/bdev/iscsi/bdev_iscsi.o 00:03:30.061 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:30.061 CC module/bdev/null/bdev_null_rpc.o 00:03:30.061 CC module/bdev/split/vbdev_split.o 00:03:30.062 CC module/bdev/nvme/bdev_nvme.o 00:03:30.062 CC module/bdev/passthru/vbdev_passthru.o 00:03:30.062 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:30.062 CC module/bdev/lvol/vbdev_lvol.o 00:03:30.062 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:30.062 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:30.062 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:30.062 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:30.062 CC module/bdev/gpt/gpt.o 00:03:30.062 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:30.062 CC module/bdev/nvme/nvme_rpc.o 00:03:30.062 CC module/bdev/split/vbdev_split_rpc.o 00:03:30.062 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:30.062 CC module/bdev/raid/bdev_raid.o 00:03:30.062 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:30.062 CC module/bdev/gpt/vbdev_gpt.o 00:03:30.062 CC module/bdev/aio/bdev_aio.o 00:03:30.062 CC module/bdev/error/vbdev_error_rpc.o 00:03:30.062 CC module/bdev/error/vbdev_error.o 00:03:30.062 CC module/bdev/aio/bdev_aio_rpc.o 00:03:30.062 CC module/bdev/raid/bdev_raid_rpc.o 00:03:30.062 CC module/bdev/nvme/bdev_mdns_client.o 00:03:30.062 CC module/bdev/raid/bdev_raid_sb.o 00:03:30.062 CC module/bdev/malloc/bdev_malloc.o 00:03:30.062 CC module/bdev/nvme/vbdev_opal.o 00:03:30.062 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:30.062 CC module/bdev/raid/raid0.o 00:03:30.062 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:30.062 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:30.062 CC module/bdev/raid/raid1.o 00:03:30.062 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:30.062 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:30.062 CC module/bdev/ftl/bdev_ftl.o 00:03:30.062 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:30.062 CC module/bdev/raid/concat.o 00:03:30.319 SYMLINK libspdk_vfu_device.so 00:03:30.319 LIB libspdk_sock_posix.a 00:03:30.319 SO libspdk_sock_posix.so.6.0 00:03:30.577 LIB libspdk_blobfs_bdev.a 00:03:30.577 SYMLINK libspdk_sock_posix.so 00:03:30.577 SO libspdk_blobfs_bdev.so.6.0 00:03:30.577 LIB libspdk_bdev_error.a 00:03:30.577 LIB libspdk_bdev_split.a 00:03:30.577 SO libspdk_bdev_error.so.6.0 00:03:30.577 SYMLINK libspdk_blobfs_bdev.so 00:03:30.577 SO 
libspdk_bdev_split.so.6.0 00:03:30.577 LIB libspdk_bdev_passthru.a 00:03:30.577 SYMLINK libspdk_bdev_error.so 00:03:30.577 LIB libspdk_bdev_null.a 00:03:30.577 LIB libspdk_bdev_ftl.a 00:03:30.577 LIB libspdk_bdev_gpt.a 00:03:30.577 SO libspdk_bdev_passthru.so.6.0 00:03:30.577 SYMLINK libspdk_bdev_split.so 00:03:30.577 LIB libspdk_bdev_iscsi.a 00:03:30.577 SO libspdk_bdev_null.so.6.0 00:03:30.577 SO libspdk_bdev_ftl.so.6.0 00:03:30.577 SO libspdk_bdev_gpt.so.6.0 00:03:30.577 LIB libspdk_bdev_malloc.a 00:03:30.577 SO libspdk_bdev_iscsi.so.6.0 00:03:30.577 LIB libspdk_bdev_aio.a 00:03:30.577 SYMLINK libspdk_bdev_passthru.so 00:03:30.577 SO libspdk_bdev_malloc.so.6.0 00:03:30.577 LIB libspdk_bdev_zone_block.a 00:03:30.834 SYMLINK libspdk_bdev_null.so 00:03:30.834 SYMLINK libspdk_bdev_ftl.so 00:03:30.834 SYMLINK libspdk_bdev_gpt.so 00:03:30.834 SO libspdk_bdev_aio.so.6.0 00:03:30.834 SO libspdk_bdev_zone_block.so.6.0 00:03:30.834 SYMLINK libspdk_bdev_iscsi.so 00:03:30.834 LIB libspdk_bdev_delay.a 00:03:30.834 SYMLINK libspdk_bdev_malloc.so 00:03:30.834 SO libspdk_bdev_delay.so.6.0 00:03:30.834 SYMLINK libspdk_bdev_aio.so 00:03:30.834 SYMLINK libspdk_bdev_zone_block.so 00:03:30.834 SYMLINK libspdk_bdev_delay.so 00:03:30.834 LIB libspdk_bdev_virtio.a 00:03:30.834 SO libspdk_bdev_virtio.so.6.0 00:03:30.834 LIB libspdk_bdev_lvol.a 00:03:30.834 SO libspdk_bdev_lvol.so.6.0 00:03:30.834 SYMLINK libspdk_bdev_virtio.so 00:03:31.091 SYMLINK libspdk_bdev_lvol.so 00:03:31.348 LIB libspdk_bdev_raid.a 00:03:31.348 SO libspdk_bdev_raid.so.6.0 00:03:31.348 SYMLINK libspdk_bdev_raid.so 00:03:32.719 LIB libspdk_bdev_nvme.a 00:03:32.719 SO libspdk_bdev_nvme.so.7.0 00:03:32.719 SYMLINK libspdk_bdev_nvme.so 00:03:32.977 CC module/event/subsystems/iobuf/iobuf.o 00:03:32.977 CC module/event/subsystems/sock/sock.o 00:03:32.977 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:32.977 CC module/event/subsystems/keyring/keyring.o 00:03:32.977 CC module/event/subsystems/vmd/vmd.o 00:03:32.977 CC module/event/subsystems/scheduler/scheduler.o 00:03:32.977 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:32.977 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:32.977 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:32.977 LIB libspdk_event_keyring.a 00:03:33.236 LIB libspdk_event_vhost_blk.a 00:03:33.236 LIB libspdk_event_sock.a 00:03:33.236 LIB libspdk_event_vfu_tgt.a 00:03:33.236 LIB libspdk_event_scheduler.a 00:03:33.236 LIB libspdk_event_vmd.a 00:03:33.236 SO libspdk_event_keyring.so.1.0 00:03:33.236 LIB libspdk_event_iobuf.a 00:03:33.236 SO libspdk_event_vhost_blk.so.3.0 00:03:33.236 SO libspdk_event_sock.so.5.0 00:03:33.236 SO libspdk_event_vfu_tgt.so.3.0 00:03:33.236 SO libspdk_event_scheduler.so.4.0 00:03:33.236 SO libspdk_event_vmd.so.6.0 00:03:33.236 SO libspdk_event_iobuf.so.3.0 00:03:33.236 SYMLINK libspdk_event_keyring.so 00:03:33.236 SYMLINK libspdk_event_vhost_blk.so 00:03:33.236 SYMLINK libspdk_event_vfu_tgt.so 00:03:33.236 SYMLINK libspdk_event_sock.so 00:03:33.236 SYMLINK libspdk_event_scheduler.so 00:03:33.236 SYMLINK libspdk_event_vmd.so 00:03:33.236 SYMLINK libspdk_event_iobuf.so 00:03:33.494 CC module/event/subsystems/accel/accel.o 00:03:33.494 LIB libspdk_event_accel.a 00:03:33.494 SO libspdk_event_accel.so.6.0 00:03:33.752 SYMLINK libspdk_event_accel.so 00:03:33.752 CC module/event/subsystems/bdev/bdev.o 00:03:34.011 LIB libspdk_event_bdev.a 00:03:34.011 SO libspdk_event_bdev.so.6.0 00:03:34.011 SYMLINK libspdk_event_bdev.so 00:03:34.270 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:34.270 CC 
module/event/subsystems/nbd/nbd.o 00:03:34.270 CC module/event/subsystems/ublk/ublk.o 00:03:34.270 CC module/event/subsystems/scsi/scsi.o 00:03:34.270 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:34.270 LIB libspdk_event_ublk.a 00:03:34.270 LIB libspdk_event_nbd.a 00:03:34.270 LIB libspdk_event_scsi.a 00:03:34.528 SO libspdk_event_ublk.so.3.0 00:03:34.528 SO libspdk_event_nbd.so.6.0 00:03:34.528 SO libspdk_event_scsi.so.6.0 00:03:34.528 SYMLINK libspdk_event_nbd.so 00:03:34.528 SYMLINK libspdk_event_ublk.so 00:03:34.528 SYMLINK libspdk_event_scsi.so 00:03:34.528 LIB libspdk_event_nvmf.a 00:03:34.528 SO libspdk_event_nvmf.so.6.0 00:03:34.528 SYMLINK libspdk_event_nvmf.so 00:03:34.528 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:34.528 CC module/event/subsystems/iscsi/iscsi.o 00:03:34.802 LIB libspdk_event_vhost_scsi.a 00:03:34.802 LIB libspdk_event_iscsi.a 00:03:34.802 SO libspdk_event_vhost_scsi.so.3.0 00:03:34.802 SO libspdk_event_iscsi.so.6.0 00:03:34.802 SYMLINK libspdk_event_vhost_scsi.so 00:03:34.802 SYMLINK libspdk_event_iscsi.so 00:03:35.061 SO libspdk.so.6.0 00:03:35.061 SYMLINK libspdk.so 00:03:35.325 CC app/trace_record/trace_record.o 00:03:35.325 CC test/rpc_client/rpc_client_test.o 00:03:35.325 CXX app/trace/trace.o 00:03:35.325 TEST_HEADER include/spdk/accel.h 00:03:35.325 TEST_HEADER include/spdk/accel_module.h 00:03:35.325 CC app/spdk_top/spdk_top.o 00:03:35.325 TEST_HEADER include/spdk/assert.h 00:03:35.325 CC app/spdk_lspci/spdk_lspci.o 00:03:35.325 CC app/spdk_nvme_discover/discovery_aer.o 00:03:35.325 CC app/spdk_nvme_perf/perf.o 00:03:35.325 CC app/spdk_nvme_identify/identify.o 00:03:35.325 TEST_HEADER include/spdk/barrier.h 00:03:35.325 TEST_HEADER include/spdk/base64.h 00:03:35.325 TEST_HEADER include/spdk/bdev.h 00:03:35.325 TEST_HEADER include/spdk/bdev_module.h 00:03:35.325 TEST_HEADER include/spdk/bdev_zone.h 00:03:35.325 TEST_HEADER include/spdk/bit_array.h 00:03:35.325 TEST_HEADER include/spdk/bit_pool.h 00:03:35.325 TEST_HEADER include/spdk/blob_bdev.h 00:03:35.325 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:35.325 TEST_HEADER include/spdk/blobfs.h 00:03:35.325 TEST_HEADER include/spdk/blob.h 00:03:35.325 TEST_HEADER include/spdk/conf.h 00:03:35.325 TEST_HEADER include/spdk/config.h 00:03:35.325 TEST_HEADER include/spdk/cpuset.h 00:03:35.325 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:35.325 TEST_HEADER include/spdk/crc16.h 00:03:35.325 TEST_HEADER include/spdk/crc32.h 00:03:35.325 TEST_HEADER include/spdk/crc64.h 00:03:35.325 TEST_HEADER include/spdk/dif.h 00:03:35.325 CC app/spdk_dd/spdk_dd.o 00:03:35.325 TEST_HEADER include/spdk/dma.h 00:03:35.325 TEST_HEADER include/spdk/endian.h 00:03:35.326 TEST_HEADER include/spdk/env_dpdk.h 00:03:35.326 CC app/nvmf_tgt/nvmf_main.o 00:03:35.326 CC app/iscsi_tgt/iscsi_tgt.o 00:03:35.326 TEST_HEADER include/spdk/env.h 00:03:35.326 TEST_HEADER include/spdk/event.h 00:03:35.326 TEST_HEADER include/spdk/fd_group.h 00:03:35.326 CC app/vhost/vhost.o 00:03:35.326 TEST_HEADER include/spdk/fd.h 00:03:35.326 TEST_HEADER include/spdk/file.h 00:03:35.326 TEST_HEADER include/spdk/ftl.h 00:03:35.326 TEST_HEADER include/spdk/gpt_spec.h 00:03:35.326 TEST_HEADER include/spdk/hexlify.h 00:03:35.326 TEST_HEADER include/spdk/histogram_data.h 00:03:35.326 TEST_HEADER include/spdk/idxd.h 00:03:35.326 CC examples/ioat/perf/perf.o 00:03:35.326 TEST_HEADER include/spdk/idxd_spec.h 00:03:35.326 CC examples/sock/hello_world/hello_sock.o 00:03:35.326 TEST_HEADER include/spdk/init.h 00:03:35.326 CC examples/vmd/lsvmd/lsvmd.o 
00:03:35.326 TEST_HEADER include/spdk/ioat.h 00:03:35.326 CC examples/nvme/reconnect/reconnect.o 00:03:35.326 CC app/spdk_tgt/spdk_tgt.o 00:03:35.326 TEST_HEADER include/spdk/ioat_spec.h 00:03:35.326 CC test/app/jsoncat/jsoncat.o 00:03:35.326 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:35.326 CC examples/ioat/verify/verify.o 00:03:35.326 CC test/app/stub/stub.o 00:03:35.326 CC test/app/histogram_perf/histogram_perf.o 00:03:35.326 CC examples/nvme/hello_world/hello_world.o 00:03:35.326 TEST_HEADER include/spdk/iscsi_spec.h 00:03:35.326 CC test/env/vtophys/vtophys.o 00:03:35.326 CC examples/vmd/led/led.o 00:03:35.326 CC test/nvme/aer/aer.o 00:03:35.326 CC test/event/event_perf/event_perf.o 00:03:35.326 CC examples/util/zipf/zipf.o 00:03:35.326 TEST_HEADER include/spdk/json.h 00:03:35.326 CC examples/accel/perf/accel_perf.o 00:03:35.326 TEST_HEADER include/spdk/jsonrpc.h 00:03:35.326 CC test/thread/poller_perf/poller_perf.o 00:03:35.326 CC examples/idxd/perf/perf.o 00:03:35.326 TEST_HEADER include/spdk/keyring.h 00:03:35.326 CC app/fio/nvme/fio_plugin.o 00:03:35.326 TEST_HEADER include/spdk/keyring_module.h 00:03:35.326 TEST_HEADER include/spdk/likely.h 00:03:35.326 TEST_HEADER include/spdk/log.h 00:03:35.326 TEST_HEADER include/spdk/lvol.h 00:03:35.326 TEST_HEADER include/spdk/memory.h 00:03:35.326 TEST_HEADER include/spdk/mmio.h 00:03:35.326 TEST_HEADER include/spdk/nbd.h 00:03:35.326 CC examples/blob/hello_world/hello_blob.o 00:03:35.326 CC test/blobfs/mkfs/mkfs.o 00:03:35.326 TEST_HEADER include/spdk/notify.h 00:03:35.326 CC test/bdev/bdevio/bdevio.o 00:03:35.326 TEST_HEADER include/spdk/nvme.h 00:03:35.326 CC examples/blob/cli/blobcli.o 00:03:35.326 CC examples/bdev/hello_world/hello_bdev.o 00:03:35.326 TEST_HEADER include/spdk/nvme_intel.h 00:03:35.326 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:35.326 CC examples/bdev/bdevperf/bdevperf.o 00:03:35.326 CC test/accel/dif/dif.o 00:03:35.326 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:35.589 TEST_HEADER include/spdk/nvme_spec.h 00:03:35.589 CC test/dma/test_dma/test_dma.o 00:03:35.589 TEST_HEADER include/spdk/nvme_zns.h 00:03:35.589 CC examples/thread/thread/thread_ex.o 00:03:35.589 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:35.589 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:35.589 CC test/app/bdev_svc/bdev_svc.o 00:03:35.589 TEST_HEADER include/spdk/nvmf.h 00:03:35.589 CC examples/nvmf/nvmf/nvmf.o 00:03:35.589 TEST_HEADER include/spdk/nvmf_spec.h 00:03:35.589 TEST_HEADER include/spdk/nvmf_transport.h 00:03:35.589 TEST_HEADER include/spdk/opal.h 00:03:35.589 TEST_HEADER include/spdk/opal_spec.h 00:03:35.589 TEST_HEADER include/spdk/pci_ids.h 00:03:35.589 TEST_HEADER include/spdk/pipe.h 00:03:35.589 TEST_HEADER include/spdk/queue.h 00:03:35.589 TEST_HEADER include/spdk/reduce.h 00:03:35.589 TEST_HEADER include/spdk/rpc.h 00:03:35.589 TEST_HEADER include/spdk/scheduler.h 00:03:35.589 TEST_HEADER include/spdk/scsi.h 00:03:35.589 TEST_HEADER include/spdk/scsi_spec.h 00:03:35.589 TEST_HEADER include/spdk/sock.h 00:03:35.589 CC test/env/mem_callbacks/mem_callbacks.o 00:03:35.589 TEST_HEADER include/spdk/stdinc.h 00:03:35.589 TEST_HEADER include/spdk/string.h 00:03:35.589 CC test/lvol/esnap/esnap.o 00:03:35.589 TEST_HEADER include/spdk/thread.h 00:03:35.589 TEST_HEADER include/spdk/trace.h 00:03:35.589 TEST_HEADER include/spdk/trace_parser.h 00:03:35.589 LINK spdk_lspci 00:03:35.589 TEST_HEADER include/spdk/tree.h 00:03:35.589 TEST_HEADER include/spdk/ublk.h 00:03:35.589 TEST_HEADER include/spdk/util.h 00:03:35.589 CC 
test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:35.589 TEST_HEADER include/spdk/uuid.h 00:03:35.589 TEST_HEADER include/spdk/version.h 00:03:35.589 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:35.589 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:35.589 TEST_HEADER include/spdk/vhost.h 00:03:35.589 TEST_HEADER include/spdk/vmd.h 00:03:35.589 TEST_HEADER include/spdk/xor.h 00:03:35.589 TEST_HEADER include/spdk/zipf.h 00:03:35.589 CXX test/cpp_headers/accel.o 00:03:35.589 LINK rpc_client_test 00:03:35.589 LINK interrupt_tgt 00:03:35.589 LINK spdk_nvme_discover 00:03:35.589 LINK lsvmd 00:03:35.589 LINK jsoncat 00:03:35.590 LINK histogram_perf 00:03:35.590 LINK vtophys 00:03:35.590 LINK led 00:03:35.590 LINK nvmf_tgt 00:03:35.590 LINK zipf 00:03:35.590 LINK poller_perf 00:03:35.853 LINK event_perf 00:03:35.853 LINK vhost 00:03:35.853 LINK stub 00:03:35.853 LINK iscsi_tgt 00:03:35.853 LINK spdk_trace_record 00:03:35.853 LINK ioat_perf 00:03:35.853 LINK spdk_tgt 00:03:35.853 LINK verify 00:03:35.853 LINK hello_world 00:03:35.853 LINK mkfs 00:03:35.853 LINK bdev_svc 00:03:35.853 LINK hello_sock 00:03:35.853 LINK hello_blob 00:03:35.853 LINK hello_bdev 00:03:35.853 LINK aer 00:03:35.853 CXX test/cpp_headers/accel_module.o 00:03:36.113 LINK thread 00:03:36.113 CXX test/cpp_headers/assert.o 00:03:36.113 LINK spdk_dd 00:03:36.113 CC test/nvme/reset/reset.o 00:03:36.113 LINK reconnect 00:03:36.113 LINK nvmf 00:03:36.113 LINK idxd_perf 00:03:36.113 CC test/nvme/sgl/sgl.o 00:03:36.113 LINK spdk_trace 00:03:36.113 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:36.113 CC test/nvme/e2edp/nvme_dp.o 00:03:36.113 CXX test/cpp_headers/barrier.o 00:03:36.113 CC test/env/memory/memory_ut.o 00:03:36.113 CC test/event/reactor/reactor.o 00:03:36.113 LINK bdevio 00:03:36.113 LINK test_dma 00:03:36.113 CC test/nvme/overhead/overhead.o 00:03:36.113 CC examples/nvme/arbitration/arbitration.o 00:03:36.113 CC test/env/pci/pci_ut.o 00:03:36.113 CC test/nvme/err_injection/err_injection.o 00:03:36.113 CC examples/nvme/hotplug/hotplug.o 00:03:36.378 CXX test/cpp_headers/base64.o 00:03:36.378 CC test/nvme/startup/startup.o 00:03:36.378 CC app/fio/bdev/fio_plugin.o 00:03:36.378 CC test/nvme/reserve/reserve.o 00:03:36.378 CXX test/cpp_headers/bdev.o 00:03:36.378 CC test/nvme/simple_copy/simple_copy.o 00:03:36.378 LINK dif 00:03:36.378 CC test/event/reactor_perf/reactor_perf.o 00:03:36.378 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:36.378 LINK nvme_manage 00:03:36.378 LINK accel_perf 00:03:36.378 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:36.378 CXX test/cpp_headers/bdev_module.o 00:03:36.378 LINK nvme_fuzz 00:03:36.378 CXX test/cpp_headers/bdev_zone.o 00:03:36.378 CXX test/cpp_headers/bit_array.o 00:03:36.378 LINK blobcli 00:03:36.378 CC test/event/app_repeat/app_repeat.o 00:03:36.378 LINK env_dpdk_post_init 00:03:36.378 LINK spdk_nvme 00:03:36.378 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:36.378 CC test/nvme/connect_stress/connect_stress.o 00:03:36.378 LINK reactor 00:03:36.378 CC examples/nvme/abort/abort.o 00:03:36.640 CXX test/cpp_headers/bit_pool.o 00:03:36.640 CC test/event/scheduler/scheduler.o 00:03:36.640 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:36.640 CC test/nvme/compliance/nvme_compliance.o 00:03:36.640 LINK reset 00:03:36.640 LINK sgl 00:03:36.640 CC test/nvme/boot_partition/boot_partition.o 00:03:36.640 LINK err_injection 00:03:36.640 CC test/nvme/fused_ordering/fused_ordering.o 00:03:36.640 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:36.640 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:03:36.640 LINK reactor_perf 00:03:36.640 CXX test/cpp_headers/blob_bdev.o 00:03:36.640 CXX test/cpp_headers/blobfs_bdev.o 00:03:36.640 LINK startup 00:03:36.640 CXX test/cpp_headers/blobfs.o 00:03:36.640 LINK hotplug 00:03:36.640 CXX test/cpp_headers/blob.o 00:03:36.640 CXX test/cpp_headers/conf.o 00:03:36.640 LINK nvme_dp 00:03:36.640 CXX test/cpp_headers/config.o 00:03:36.640 CXX test/cpp_headers/cpuset.o 00:03:36.640 LINK app_repeat 00:03:36.640 LINK reserve 00:03:36.640 LINK mem_callbacks 00:03:36.640 CXX test/cpp_headers/crc16.o 00:03:36.901 CXX test/cpp_headers/crc32.o 00:03:36.901 LINK overhead 00:03:36.901 CC test/nvme/fdp/fdp.o 00:03:36.901 CXX test/cpp_headers/crc64.o 00:03:36.901 CC test/nvme/cuse/cuse.o 00:03:36.901 LINK simple_copy 00:03:36.901 LINK spdk_nvme_perf 00:03:36.901 CXX test/cpp_headers/dif.o 00:03:36.901 CXX test/cpp_headers/dma.o 00:03:36.901 CXX test/cpp_headers/endian.o 00:03:36.901 CXX test/cpp_headers/env_dpdk.o 00:03:36.901 CXX test/cpp_headers/env.o 00:03:36.901 LINK cmb_copy 00:03:36.901 LINK connect_stress 00:03:36.901 LINK spdk_nvme_identify 00:03:36.901 LINK bdevperf 00:03:36.901 CXX test/cpp_headers/event.o 00:03:36.901 LINK pmr_persistence 00:03:36.901 CXX test/cpp_headers/fd_group.o 00:03:36.901 LINK arbitration 00:03:36.901 CXX test/cpp_headers/fd.o 00:03:36.901 LINK spdk_top 00:03:36.901 CXX test/cpp_headers/file.o 00:03:36.901 LINK scheduler 00:03:36.901 LINK pci_ut 00:03:36.901 LINK boot_partition 00:03:36.901 CXX test/cpp_headers/ftl.o 00:03:36.901 CXX test/cpp_headers/gpt_spec.o 00:03:37.166 CXX test/cpp_headers/hexlify.o 00:03:37.166 CXX test/cpp_headers/histogram_data.o 00:03:37.166 LINK doorbell_aers 00:03:37.166 LINK fused_ordering 00:03:37.166 CXX test/cpp_headers/idxd.o 00:03:37.166 CXX test/cpp_headers/idxd_spec.o 00:03:37.166 CXX test/cpp_headers/init.o 00:03:37.166 CXX test/cpp_headers/ioat.o 00:03:37.166 CXX test/cpp_headers/ioat_spec.o 00:03:37.166 CXX test/cpp_headers/iscsi_spec.o 00:03:37.166 CXX test/cpp_headers/json.o 00:03:37.166 CXX test/cpp_headers/jsonrpc.o 00:03:37.166 CXX test/cpp_headers/keyring.o 00:03:37.166 CXX test/cpp_headers/keyring_module.o 00:03:37.166 CXX test/cpp_headers/likely.o 00:03:37.166 CXX test/cpp_headers/log.o 00:03:37.166 CXX test/cpp_headers/lvol.o 00:03:37.166 CXX test/cpp_headers/memory.o 00:03:37.166 CXX test/cpp_headers/mmio.o 00:03:37.166 CXX test/cpp_headers/nbd.o 00:03:37.166 CXX test/cpp_headers/notify.o 00:03:37.166 CXX test/cpp_headers/nvme.o 00:03:37.166 CXX test/cpp_headers/nvme_intel.o 00:03:37.166 CXX test/cpp_headers/nvme_ocssd.o 00:03:37.166 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:37.166 LINK abort 00:03:37.166 CXX test/cpp_headers/nvme_spec.o 00:03:37.166 LINK spdk_bdev 00:03:37.166 CXX test/cpp_headers/nvme_zns.o 00:03:37.166 LINK nvme_compliance 00:03:37.166 CXX test/cpp_headers/nvmf_cmd.o 00:03:37.166 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:37.166 CXX test/cpp_headers/nvmf.o 00:03:37.431 CXX test/cpp_headers/nvmf_spec.o 00:03:37.431 CXX test/cpp_headers/nvmf_transport.o 00:03:37.431 CXX test/cpp_headers/opal.o 00:03:37.431 CXX test/cpp_headers/opal_spec.o 00:03:37.431 CXX test/cpp_headers/pci_ids.o 00:03:37.431 CXX test/cpp_headers/pipe.o 00:03:37.431 CXX test/cpp_headers/queue.o 00:03:37.431 CXX test/cpp_headers/reduce.o 00:03:37.431 CXX test/cpp_headers/rpc.o 00:03:37.431 CXX test/cpp_headers/scheduler.o 00:03:37.431 CXX test/cpp_headers/scsi.o 00:03:37.431 CXX test/cpp_headers/scsi_spec.o 00:03:37.431 CXX test/cpp_headers/sock.o 
00:03:37.431 CXX test/cpp_headers/stdinc.o 00:03:37.431 LINK fdp 00:03:37.431 LINK vhost_fuzz 00:03:37.431 CXX test/cpp_headers/string.o 00:03:37.431 CXX test/cpp_headers/thread.o 00:03:37.431 CXX test/cpp_headers/trace.o 00:03:37.431 CXX test/cpp_headers/trace_parser.o 00:03:37.431 CXX test/cpp_headers/tree.o 00:03:37.431 CXX test/cpp_headers/ublk.o 00:03:37.431 CXX test/cpp_headers/util.o 00:03:37.431 CXX test/cpp_headers/uuid.o 00:03:37.431 CXX test/cpp_headers/version.o 00:03:37.431 CXX test/cpp_headers/vfio_user_pci.o 00:03:37.432 CXX test/cpp_headers/vfio_user_spec.o 00:03:37.432 CXX test/cpp_headers/vhost.o 00:03:37.432 CXX test/cpp_headers/vmd.o 00:03:37.432 CXX test/cpp_headers/xor.o 00:03:37.432 CXX test/cpp_headers/zipf.o 00:03:37.998 LINK memory_ut 00:03:38.563 LINK iscsi_fuzz 00:03:38.563 LINK cuse 00:03:41.872 LINK esnap 00:03:41.872 00:03:41.872 real 0m41.138s 00:03:41.872 user 7m34.909s 00:03:41.872 sys 1m51.188s 00:03:41.872 05:17:48 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:41.872 05:17:48 make -- common/autotest_common.sh@10 -- $ set +x 00:03:41.872 ************************************ 00:03:41.872 END TEST make 00:03:41.872 ************************************ 00:03:41.872 05:17:48 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:41.872 05:17:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:41.872 05:17:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:41.872 05:17:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.872 05:17:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:41.872 05:17:48 -- pm/common@44 -- $ pid=2995031 00:03:41.872 05:17:48 -- pm/common@50 -- $ kill -TERM 2995031 00:03:41.872 05:17:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.872 05:17:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:41.872 05:17:48 -- pm/common@44 -- $ pid=2995033 00:03:41.872 05:17:48 -- pm/common@50 -- $ kill -TERM 2995033 00:03:41.872 05:17:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.872 05:17:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:41.872 05:17:48 -- pm/common@44 -- $ pid=2995035 00:03:41.872 05:17:48 -- pm/common@50 -- $ kill -TERM 2995035 00:03:41.872 05:17:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.872 05:17:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:41.872 05:17:48 -- pm/common@44 -- $ pid=2995063 00:03:41.872 05:17:48 -- pm/common@50 -- $ sudo -E kill -TERM 2995063 00:03:41.872 05:17:48 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:41.872 05:17:48 -- nvmf/common.sh@7 -- # uname -s 00:03:41.872 05:17:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:41.872 05:17:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:41.872 05:17:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:41.872 05:17:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:41.872 05:17:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:41.872 05:17:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:41.872 05:17:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:41.872 05:17:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:03:41.872 05:17:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:41.872 05:17:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:41.872 05:17:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:41.872 05:17:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:41.872 05:17:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:41.872 05:17:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:41.872 05:17:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:41.872 05:17:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:41.872 05:17:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:41.872 05:17:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:41.872 05:17:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:41.872 05:17:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:41.872 05:17:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.872 05:17:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.872 05:17:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.872 05:17:48 -- paths/export.sh@5 -- # export PATH 00:03:41.872 05:17:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.872 05:17:48 -- nvmf/common.sh@47 -- # : 0 00:03:41.872 05:17:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:41.872 05:17:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:41.872 05:17:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:41.872 05:17:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:41.872 05:17:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:41.872 05:17:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:41.872 05:17:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:41.872 05:17:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:41.872 05:17:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:41.872 05:17:48 -- spdk/autotest.sh@32 -- # uname -s 00:03:41.872 05:17:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:41.872 05:17:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:41.872 05:17:48 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:41.872 05:17:48 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:41.872 05:17:48 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:41.872 05:17:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:41.872 05:17:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:41.872 05:17:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:41.872 05:17:48 -- spdk/autotest.sh@48 -- # udevadm_pid=3071505 00:03:41.872 05:17:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:41.872 05:17:48 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:41.872 05:17:48 -- pm/common@17 -- # local monitor 00:03:41.872 05:17:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.872 05:17:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.872 05:17:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.872 05:17:48 -- pm/common@21 -- # date +%s 00:03:41.872 05:17:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.872 05:17:48 -- pm/common@21 -- # date +%s 00:03:41.872 05:17:48 -- pm/common@25 -- # sleep 1 00:03:41.872 05:17:48 -- pm/common@21 -- # date +%s 00:03:41.872 05:17:48 -- pm/common@21 -- # date +%s 00:03:41.872 05:17:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720927068 00:03:41.872 05:17:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720927068 00:03:41.872 05:17:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720927068 00:03:41.872 05:17:48 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720927068 00:03:41.872 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720927068_collect-vmstat.pm.log 00:03:41.872 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720927068_collect-cpu-load.pm.log 00:03:41.872 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720927068_collect-cpu-temp.pm.log 00:03:41.872 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720927068_collect-bmc-pm.bmc.pm.log 00:03:42.809 05:17:49 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:42.809 05:17:49 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:42.809 05:17:49 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:42.809 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:42.809 05:17:49 -- spdk/autotest.sh@59 -- # create_test_list 00:03:42.809 05:17:49 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:42.809 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:42.809 05:17:49 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:42.809 05:17:49 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:42.809 05:17:49 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:42.809 05:17:49 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:42.809 05:17:49 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:42.809 05:17:49 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:42.809 05:17:49 -- common/autotest_common.sh@1451 -- # uname 00:03:42.809 05:17:49 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:42.809 05:17:49 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:42.809 05:17:49 -- common/autotest_common.sh@1471 -- # uname 00:03:42.809 05:17:49 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:42.809 05:17:49 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:42.809 05:17:49 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:42.809 05:17:49 -- spdk/autotest.sh@72 -- # hash lcov 00:03:42.809 05:17:49 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:42.809 05:17:49 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:42.809 --rc lcov_branch_coverage=1 00:03:42.809 --rc lcov_function_coverage=1 00:03:42.809 --rc genhtml_branch_coverage=1 00:03:42.809 --rc genhtml_function_coverage=1 00:03:42.809 --rc genhtml_legend=1 00:03:42.809 --rc geninfo_all_blocks=1 00:03:42.809 ' 00:03:42.809 05:17:49 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:42.809 --rc lcov_branch_coverage=1 00:03:42.809 --rc lcov_function_coverage=1 00:03:42.809 --rc genhtml_branch_coverage=1 00:03:42.809 --rc genhtml_function_coverage=1 00:03:42.809 --rc genhtml_legend=1 00:03:42.809 --rc geninfo_all_blocks=1 00:03:42.809 ' 00:03:42.809 05:17:49 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:42.809 --rc lcov_branch_coverage=1 00:03:42.809 --rc lcov_function_coverage=1 00:03:42.809 --rc genhtml_branch_coverage=1 00:03:42.809 --rc genhtml_function_coverage=1 00:03:42.809 --rc genhtml_legend=1 00:03:42.809 --rc geninfo_all_blocks=1 00:03:42.809 --no-external' 00:03:42.809 05:17:49 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:42.809 --rc lcov_branch_coverage=1 00:03:42.809 --rc lcov_function_coverage=1 00:03:42.809 --rc genhtml_branch_coverage=1 00:03:42.809 --rc genhtml_function_coverage=1 00:03:42.809 --rc genhtml_legend=1 00:03:42.809 --rc geninfo_all_blocks=1 00:03:42.809 --no-external' 00:03:42.809 05:17:49 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:42.809 lcov: LCOV version 1.14 00:03:43.067 05:17:49 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:57.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:57.944 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 
00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:12.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:12.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:12.855 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:12.855 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:12.855 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:12.855 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:12.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:12.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:12.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:12.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:12.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:15.382 05:18:22 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:15.382 05:18:22 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:15.382 05:18:22 -- common/autotest_common.sh@10 -- # set +x 00:04:15.382 05:18:22 -- spdk/autotest.sh@91 -- # rm -f 00:04:15.640 05:18:22 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.574 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:16.574 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:16.574 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:16.574 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:16.574 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:16.574 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:16.574 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:16.574 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:16.574 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:16.574 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:16.574 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:16.831 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:16.831 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:16.831 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:16.831 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:16.831 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:16.831 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:16.831 05:18:23 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:16.831 05:18:23 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:16.831 05:18:23 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:16.831 05:18:23 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:16.831 05:18:23 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:16.831 05:18:23 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:16.831 05:18:23 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:16.831 05:18:23 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.831 05:18:23 
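The geninfo warnings above come from the test/cpp_headers objects: each of those unit tests only compiles one public SPDK header, so the matching .gcno note files describe no executable functions and geninfo reports "no functions found" / "did not produce any data" for them. They are noise rather than coverage failures. For orientation, a minimal lcov capture looks roughly like the sketch below; this is an illustrative sequence, not the exact invocation this autotest job runs.

#!/usr/bin/env bash
# Illustrative lcov/geninfo flow: capture .gcno/.gcda data into a tracefile
# and render an HTML report. Header-only objects such as the cpp_headers
# tests contribute no functions, which is what triggers the warnings above.
set -euo pipefail

BUILD_DIR=${1:-.}        # tree containing the .gcno/.gcda files
OUT=${2:-coverage}

lcov --capture --directory "$BUILD_DIR" --output-file "$OUT.info"
lcov --remove "$OUT.info" '/usr/*' --output-file "$OUT.filtered.info"   # drop system headers
genhtml "$OUT.filtered.info" --output-directory "${OUT}-html"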
-- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:16.831 05:18:23 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:16.832 05:18:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.832 05:18:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:16.832 05:18:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:16.832 05:18:23 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:16.832 05:18:23 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:16.832 No valid GPT data, bailing 00:04:16.832 05:18:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:16.832 05:18:23 -- scripts/common.sh@391 -- # pt= 00:04:16.832 05:18:23 -- scripts/common.sh@392 -- # return 1 00:04:16.832 05:18:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:16.832 1+0 records in 00:04:16.832 1+0 records out 00:04:16.832 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00258897 s, 405 MB/s 00:04:16.832 05:18:23 -- spdk/autotest.sh@118 -- # sync 00:04:16.832 05:18:23 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:16.832 05:18:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:16.832 05:18:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:18.734 05:18:25 -- spdk/autotest.sh@124 -- # uname -s 00:04:18.734 05:18:25 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:18.734 05:18:25 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:18.734 05:18:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:18.734 05:18:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:18.734 05:18:25 -- common/autotest_common.sh@10 -- # set +x 00:04:18.734 ************************************ 00:04:18.734 START TEST setup.sh 00:04:18.734 ************************************ 00:04:18.734 05:18:25 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:18.734 * Looking for test storage... 00:04:18.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:18.734 05:18:25 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:18.734 05:18:25 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:18.734 05:18:25 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:18.734 05:18:25 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:18.734 05:18:25 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:18.734 05:18:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:18.734 ************************************ 00:04:18.734 START TEST acl 00:04:18.734 ************************************ 00:04:18.734 05:18:25 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:18.734 * Looking for test storage... 
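The pre-cleanup pass traced above skips zoned namespaces, asks blkid (and spdk-gpt.py) whether each NVMe namespace already carries a partition table, and zeroes the first megabyte when nothing is found ("No valid GPT data, bailing"). A stand-alone sketch of that logic follows; it is simplified (the real script also consults scripts/spdk-gpt.py) and must run as root.

#!/usr/bin/env bash
# Sketch of the cleanup logic above: leave zoned namespaces alone, and wipe
# the first 1 MiB of any whole NVMe namespace that has no partition table.
set -euo pipefail
shopt -s nullglob extglob

for dev in /dev/nvme*n!(*p*); do        # whole namespaces only, no partitions
    name=$(basename "$dev")

    # A zoned namespace reports host-managed/host-aware here instead of "none".
    if [[ -e /sys/block/$name/queue/zoned && $(cat /sys/block/$name/queue/zoned) != none ]]; then
        echo "skipping zoned device $dev"
        continue
    fi

    # blkid prints the partition-table type (gpt, dos, ...) or nothing at all.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        echo "no partition table on $dev, clearing first 1 MiB"
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done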
00:04:18.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:18.734 05:18:25 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:18.734 05:18:25 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:18.734 05:18:25 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:18.734 05:18:25 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:18.734 05:18:25 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:18.734 05:18:25 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:18.734 05:18:25 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:18.734 05:18:25 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:18.734 05:18:25 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:18.734 05:18:25 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:18.734 05:18:25 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:18.734 05:18:25 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:18.734 05:18:25 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:18.734 05:18:25 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:18.734 05:18:25 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.734 05:18:25 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.110 05:18:27 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:20.110 05:18:27 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:20.110 05:18:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.110 05:18:27 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:20.110 05:18:27 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.110 05:18:27 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:21.485 Hugepages 00:04:21.485 node hugesize free / total 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 00:04:21.485 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:21.485 05:18:28 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:21.485 05:18:28 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:21.485 05:18:28 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:21.485 05:18:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:21.485 ************************************ 00:04:21.485 START TEST denied 00:04:21.485 ************************************ 00:04:21.485 05:18:28 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:21.485 05:18:28 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:21.485 05:18:28 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:21.485 05:18:28 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:21.485 05:18:28 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.485 05:18:28 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.858 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:22.858 05:18:29 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:22.858 05:18:29 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:22.858 05:18:29 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:22.858 05:18:29 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:22.858 05:18:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:22.858 05:18:29 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:22.858 05:18:29 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:22.858 05:18:29 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:22.858 05:18:29 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.858 05:18:29 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:25.389 00:04:25.389 real 0m3.787s 00:04:25.389 user 0m1.116s 00:04:25.389 sys 0m1.799s 00:04:25.389 05:18:32 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:25.389 05:18:32 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:25.389 ************************************ 00:04:25.389 END TEST denied 00:04:25.389 ************************************ 00:04:25.389 05:18:32 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:25.389 05:18:32 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.389 05:18:32 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.389 05:18:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:25.389 ************************************ 00:04:25.389 START TEST allowed 00:04:25.389 ************************************ 00:04:25.389 05:18:32 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:25.389 05:18:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:25.389 05:18:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:25.389 05:18:32 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:25.389 05:18:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.389 05:18:32 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:27.913 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:27.913 05:18:34 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:27.913 05:18:34 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:27.913 05:18:34 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:27.913 05:18:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:27.913 05:18:34 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:29.289 00:04:29.289 real 0m3.850s 00:04:29.289 user 0m0.988s 00:04:29.289 sys 0m1.690s 00:04:29.289 05:18:36 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:29.289 05:18:36 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:29.289 ************************************ 00:04:29.289 END TEST allowed 00:04:29.289 ************************************ 00:04:29.289 00:04:29.289 real 0m10.365s 00:04:29.289 user 0m3.166s 00:04:29.289 sys 0m5.220s 00:04:29.289 05:18:36 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:29.289 05:18:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:29.289 ************************************ 00:04:29.289 END TEST acl 00:04:29.289 ************************************ 00:04:29.289 05:18:36 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:29.289 05:18:36 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:29.289 05:18:36 setup.sh -- 
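The acl tests that finish above exercise scripts/setup.sh through its PCI_BLOCKED and PCI_ALLOWED environment variables: a blocked controller is skipped ("Skipping denied controller at 0000:88:00.0") and stays on the kernel nvme driver, while an allowed controller is rebound to vfio-pci. The same interface can be driven by hand; the sketch below is a hedged illustration (the BDF is simply the controller present on this test node, and SPDK_DIR must point at a checkout).

#!/usr/bin/env bash
# Sketch: restrict which controllers SPDK's setup.sh touches, mirroring the
# "denied" and "allowed" acl tests above. Run as root.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:?set SPDK_DIR to an SPDK checkout}
BDF=0000:88:00.0                      # controller under test on this node

# Deny it: setup.sh should log "Skipping denied controller" and leave the
# kernel nvme driver bound.
PCI_BLOCKED=" $BDF" "$SPDK_DIR/scripts/setup.sh" config
readlink -f "/sys/bus/pci/devices/$BDF/driver"   # expect .../drivers/nvme
"$SPDK_DIR/scripts/setup.sh" reset

# Allow only that controller: it should be rebound nvme -> vfio-pci.
PCI_ALLOWED="$BDF" "$SPDK_DIR/scripts/setup.sh" config
readlink -f "/sys/bus/pci/devices/$BDF/driver"   # expect .../drivers/vfio-pci
"$SPDK_DIR/scripts/setup.sh" reset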
common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.289 05:18:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:29.289 ************************************ 00:04:29.289 START TEST hugepages 00:04:29.289 ************************************ 00:04:29.289 05:18:36 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:29.289 * Looking for test storage... 00:04:29.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:29.289 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:29.289 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:29.289 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:29.289 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:29.289 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:29.289 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:29.289 05:18:36 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:29.289 05:18:36 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:29.289 05:18:36 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:29.289 05:18:36 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:29.289 05:18:36 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.289 05:18:36 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.289 05:18:36 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.289 05:18:36 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.289 05:18:36 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.289 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.289 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41139532 kB' 'MemAvailable: 44649548 kB' 'Buffers: 2704 kB' 'Cached: 12826444 kB' 'SwapCached: 0 kB' 'Active: 9840804 kB' 'Inactive: 3506552 kB' 'Active(anon): 9446452 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521684 kB' 'Mapped: 181820 kB' 'Shmem: 8928244 kB' 'KReclaimable: 205472 kB' 'Slab: 583280 kB' 'SReclaimable: 205472 kB' 'SUnreclaim: 377808 kB' 'KernelStack: 12848 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 10611876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
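The long xtrace run that follows is setup/common.sh's get_meminfo walking /proc/meminfo field by field until it reaches Hugepagesize and echoes 2048 (the same loop can also strip the "Node <n>" prefix when pointed at a per-NUMA-node meminfo file). A compact sketch of that key/value lookup is below; the helper name is illustrative, not the autotest function itself.

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup: split each /proc/meminfo line on
# ':' and whitespace, and print the value of the requested field in kB
# (or a plain count for the HugePages_* fields).
set -euo pipefail

meminfo_field() {
    local field=$1 src=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$field" ]]; then
            echo "$val"
            return 0
        fi
    done < "$src"
    return 1
}

meminfo_field Hugepagesize        # 2048 on this node
meminfo_field HugePages_Total     # pages currently reserved system-wide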
00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.290 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:29.291 05:18:36 
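clear_hp above writes 0 into every hugepage pool on every NUMA node (the nr_hugepages file under each hugepages-* directory) before default_setup asks scripts/setup.sh to reserve 1024 two-megabyte pages on node 0 only. Those pools are ordinary sysfs files, so the same effect can be had directly; the sketch below is illustrative (run as root, and the node/page-size values are just this machine's defaults).

#!/usr/bin/env bash
# Sketch: drop all per-node hugepage reservations, then reserve 1024 pages
# of the 2048 kB size on node 0 -- the knobs the clear_hp/default_setup
# trace above is manipulating.
set -euo pipefail
shopt -s nullglob

for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done

echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

# Confirm the reservation (also reflected in /proc/meminfo HugePages_Total).
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
grep -E 'HugePages_Total|Hugepagesize' /proc/meminfo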
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:29.291 05:18:36 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:29.291 05:18:36 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:29.291 05:18:36 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.291 05:18:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.291 ************************************ 00:04:29.291 START TEST default_setup 00:04:29.291 ************************************ 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.291 05:18:36 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:30.698 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:30.698 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:30.698 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:30.698 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:30.698 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:30.698 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:30.699 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
00:04:30.699 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:30.699 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:30.699 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:30.699 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:30.699 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:30.699 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:30.699 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:30.699 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:30.699 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:31.637 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43254620 kB' 'MemAvailable: 46764612 kB' 'Buffers: 2704 kB' 'Cached: 12826532 kB' 'SwapCached: 0 kB' 'Active: 9857812 kB' 'Inactive: 3506552 kB' 'Active(anon): 9463460 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538468 kB' 'Mapped: 182032 kB' 'Shmem: 8928332 kB' 'KReclaimable: 205424 kB' 'Slab: 583180 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377756 kB' 'KernelStack: 12848 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10628864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.637 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 
05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.638 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@20 -- # local mem_f mem 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43254208 kB' 'MemAvailable: 46764200 kB' 'Buffers: 2704 kB' 'Cached: 12826536 kB' 'SwapCached: 0 kB' 'Active: 9857928 kB' 'Inactive: 3506552 kB' 'Active(anon): 9463576 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538560 kB' 'Mapped: 182032 kB' 'Shmem: 8928336 kB' 'KReclaimable: 205424 kB' 'Slab: 582964 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377540 kB' 'KernelStack: 12848 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10628880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.639 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.640 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43254084 kB' 'MemAvailable: 46764076 kB' 'Buffers: 2704 kB' 'Cached: 12826556 kB' 'SwapCached: 0 kB' 'Active: 9857664 kB' 'Inactive: 3506552 kB' 'Active(anon): 9463312 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538172 kB' 'Mapped: 181904 kB' 'Shmem: 8928356 kB' 'KReclaimable: 205424 kB' 'Slab: 582940 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377516 kB' 'KernelStack: 12848 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10628904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.641 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:31.642 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:31.642 nr_hugepages=1024 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:31.643 resv_hugepages=0 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:31.643 surplus_hugepages=0 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:31.643 anon_hugepages=0 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43255460 kB' 'MemAvailable: 46765452 kB' 'Buffers: 2704 kB' 'Cached: 12826560 kB' 'SwapCached: 0 kB' 'Active: 9857004 kB' 'Inactive: 3506552 kB' 'Active(anon): 9462652 kB' 'Inactive(anon): 0 kB' 
'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537524 kB' 'Mapped: 181844 kB' 'Shmem: 8928360 kB' 'KReclaimable: 205424 kB' 'Slab: 582940 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377516 kB' 'KernelStack: 12816 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10628924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.643 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
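Note on the trace above: the setup/common.sh calls just completed implement a plain meminfo lookup — read /proc/meminfo (or a per-node meminfo file when a node id is given), strip the "Node N " prefix those per-node files carry, split each line on ': ', and print the value once the requested key matches. The sketch below is a simplified, self-contained rendering of that pattern reconstructed from the trace; the function name get_meminfo_sketch is illustrative and is not the exact SPDK helper.

    shopt -s extglob                          # needed for the +([0-9]) prefix strip below
    get_meminfo_sketch() {
        local get=$1 node=${2:-}              # key to look up, optional NUMA node id
        local mem_f=/proc/meminfo var val entry
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix every line with "Node N "
        for entry in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$entry"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

With the meminfo contents printed above, get_meminfo_sketch HugePages_Rsvd would print 0 and get_meminfo_sketch HugePages_Total would print 1024 — the same pair of lookups the trace performs before moving on to the per-node checks.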
00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.644 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25679436 kB' 'MemUsed: 7150448 kB' 'SwapCached: 0 kB' 'Active: 3815376 kB' 'Inactive: 108696 kB' 'Active(anon): 3704488 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3661516 kB' 'Mapped: 61384 kB' 'AnonPages: 265740 kB' 'Shmem: 3441932 kB' 'KernelStack: 8136 kB' 'PageTables: 5360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94196 kB' 'Slab: 319108 kB' 'SReclaimable: 94196 kB' 'SUnreclaim: 224912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.645 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.904 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:31.905 node0=1024 expecting 1024 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:31.905 00:04:31.905 real 0m2.505s 00:04:31.905 user 0m0.670s 00:04:31.905 sys 0m0.896s 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:31.905 05:18:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:31.905 ************************************ 00:04:31.905 END TEST default_setup 00:04:31.905 ************************************ 00:04:31.905 05:18:38 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:31.905 05:18:38 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:31.905 05:18:38 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:31.905 05:18:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:31.905 ************************************ 00:04:31.905 START TEST per_node_1G_alloc 00:04:31.905 ************************************ 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 
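The default_setup test that just ended reconciles the hugepage counters in two steps, as the hugepages.sh lines above show: a global check of the form (( 1024 == nr_hugepages + surp + resv )) and a per-node comparison that finishes with the "node0=1024 expecting 1024" line. The sketch below is a rough, simplified rendering of that bookkeeping, reusing the illustrative get_meminfo_sketch helper from the earlier note (the real script also folds reserved pages into the per-node expectations):

    verify_hugepages_sketch() {
        local expected=$1 total resv surp node n
        total=$(get_meminfo_sketch HugePages_Total)
        resv=$(get_meminfo_sketch HugePages_Rsvd)
        surp=$(get_meminfo_sketch HugePages_Surp)
        # global pool: every expected page must be accounted for, plus surplus/reserved
        (( total == expected + surp + resv )) || return 1
        # per-node view: report what each node actually holds so it can be compared
        # against the expected split (on this run: everything on node0, nothing on node1)
        for node in /sys/devices/system/node/node[0-9]*; do
            n=${node##*node}
            echo "node${n}=$(get_meminfo_sketch HugePages_Total "$n")"
        done
    }

On this run expected=1024, surplus and reserved are both 0, and node0 holds all 1024 pages, which is what the "node0=1024 expecting 1024" check above verified before the test exited after roughly 2.5 seconds.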
00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.905 05:18:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:32.841 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:32.841 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:32.841 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:32.841 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:32.841 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:32.841 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:32.841 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:32.841 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:32.841 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:32.841 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:32.841 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:32.841 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:32.841 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:32.841 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:32.841 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:32.841 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:32.841 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43252592 kB' 'MemAvailable: 46762584 kB' 'Buffers: 2704 kB' 'Cached: 12826648 kB' 'SwapCached: 0 kB' 'Active: 9858256 kB' 'Inactive: 3506552 kB' 'Active(anon): 9463904 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538676 kB' 'Mapped: 181768 kB' 'Shmem: 8928448 kB' 'KReclaimable: 205424 kB' 'Slab: 583096 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377672 kB' 'KernelStack: 12832 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10629104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.107 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
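The long runs of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries above are the xtrace of common.sh's get_meminfo walking /proc/meminfo one field at a time until the requested key matches, then echoing its value. The sketch below is a simplified, illustrative re-creation of that lookup for the system-wide case traced here (node is empty, so /proc/meminfo is read rather than a per-node sysfs meminfo file); the function name get_meminfo_sketch and its return-1-on-miss behaviour are assumptions for illustration, not the SPDK helper itself.

    #!/usr/bin/env bash
    # Minimal sketch (illustrative only, not the SPDK helper) of the lookup
    # being traced: split each /proc/meminfo line on ': ' and print the value
    # of the requested field.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # e.g. "AnonHugePages:       0 kB" -> var=AnonHugePages, val=0
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    # The three values hugepages.sh collects in this part of the trace:
    anon=$(get_meminfo_sketch AnonHugePages)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    echo "anon=$anon surp=$surp resv=$resv"

With anon, surp and resv all read back as 0, the trace further down reports nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and verify_nr_hugepages then evaluates (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )), i.e. all 1024 configured 2048 kB pages are present with no surplus or reserved pages outstanding.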
00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.108 05:18:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43251584 kB' 'MemAvailable: 46761576 kB' 'Buffers: 2704 kB' 'Cached: 12826648 kB' 'SwapCached: 0 kB' 'Active: 9857700 kB' 'Inactive: 3506552 kB' 'Active(anon): 9463348 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538108 kB' 'Mapped: 181860 kB' 'Shmem: 8928448 kB' 'KReclaimable: 205424 kB' 'Slab: 583068 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377644 kB' 'KernelStack: 12848 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10629120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.108 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.109 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43250704 kB' 'MemAvailable: 46760696 kB' 'Buffers: 2704 kB' 'Cached: 12826672 kB' 'SwapCached: 0 kB' 'Active: 9857756 kB' 'Inactive: 3506552 kB' 'Active(anon): 9463404 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538124 kB' 'Mapped: 181860 kB' 'Shmem: 8928472 kB' 'KReclaimable: 205424 kB' 'Slab: 583156 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377732 kB' 'KernelStack: 12880 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10628776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196692 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.110 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.111 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.111 05:18:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:33.112 nr_hugepages=1024 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:33.112 resv_hugepages=0 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:33.112 surplus_hugepages=0 00:04:33.112 05:18:40 
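[Note on the xtrace above: setup/common.sh's get_meminfo walks a meminfo file one record at a time with IFS=': ' and "read -r var val _", hitting the "continue" branch for every key that is not the one requested (hence the long runs of identical lines), and finally echoes the matching value, here HugePages_Rsvd -> 0. The sketch below reconstructs that pattern from the trace only; the wrapper name get_meminfo_sketch and the simplifications are ours, not the actual setup/common.sh source.]

#!/usr/bin/env bash
# Sketch of the lookup pattern exercised above: return one key's value from
# /proc/meminfo, or from a node's own meminfo when a node index is given.
shopt -s extglob                      # the "Node +([0-9]) " strip below needs extended globs

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node <n> "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Example values matching what this run echoed (hardware-dependent):
#   get_meminfo_sketch HugePages_Rsvd    -> 0
#   get_meminfo_sketch HugePages_Total   -> 1024

The nr_hugepages/resv_hugepages/surplus_hugepages values echoed here feed the later consistency check, (( 1024 == nr_hugepages + surp + resv )), before the per-node halves are verified.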
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:33.112 anon_hugepages=0 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43250464 kB' 'MemAvailable: 46760456 kB' 'Buffers: 2704 kB' 'Cached: 12826676 kB' 'SwapCached: 0 kB' 'Active: 9857312 kB' 'Inactive: 3506552 kB' 'Active(anon): 9462960 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537676 kB' 'Mapped: 181860 kB' 'Shmem: 8928476 kB' 'KReclaimable: 205424 kB' 'Slab: 583148 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377724 kB' 'KernelStack: 12816 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10628804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196692 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.112 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.113 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.114 05:18:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26719452 kB' 'MemUsed: 6110432 kB' 'SwapCached: 0 kB' 'Active: 3815420 kB' 'Inactive: 108696 kB' 'Active(anon): 3704532 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3661576 kB' 'Mapped: 61400 kB' 'AnonPages: 265748 kB' 'Shmem: 3441992 kB' 'KernelStack: 8168 kB' 'PageTables: 5360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94196 kB' 'Slab: 319212 kB' 'SReclaimable: 94196 kB' 'SUnreclaim: 225016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.114 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.115 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16531444 kB' 'MemUsed: 11180380 kB' 'SwapCached: 0 kB' 'Active: 6042268 kB' 'Inactive: 3397856 kB' 'Active(anon): 5758804 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9167812 kB' 'Mapped: 120460 kB' 'AnonPages: 272372 kB' 'Shmem: 5486492 kB' 'KernelStack: 4712 kB' 'PageTables: 3024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111228 kB' 'Slab: 263936 kB' 'SReclaimable: 111228 kB' 'SUnreclaim: 152708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.116 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.376 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.376 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.376 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.376 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.376 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.376 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.376 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.376 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.376 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.376 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.376 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.376 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.376 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': '
[trace: get_meminfo walks the remaining meminfo keys — Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted — issuing one `continue` per key because none of them is HugePages_Surp; the per-key xtrace lines differ only in the key name. The trace picks up again after the sketch below with HugePages_Total, HugePages_Free and the HugePages_Surp match.]
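The loop traced above and below is the get_meminfo helper from setup/common.sh scanning a meminfo dump for a single key. A minimal standalone sketch of that pattern, assuming the system-wide /proc/meminfo only (the real helper can also read the per-node files under /sys/devices/system/node/node<N>/, which this sketch omits):

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: split each meminfo
# line on ':' and spaces, skip keys that do not match, and print the value
# of the requested key (0 if it never appears).
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < /proc/meminfo
    echo 0
}

get_meminfo HugePages_Surp   # prints 0 on a run like the one traced here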
00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:33.377 node0=512 expecting 512 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:33.377 node1=512 expecting 512 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:33.377 00:04:33.377 real 0m1.419s 00:04:33.377 user 0m0.569s 00:04:33.377 sys 0m0.812s 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:33.377 05:18:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:33.377 ************************************ 00:04:33.377 END TEST per_node_1G_alloc 00:04:33.377 ************************************ 00:04:33.377 05:18:40 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:33.377 05:18:40 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:33.377 05:18:40 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:33.377 05:18:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:33.377 ************************************ 00:04:33.377 
START TEST even_2G_alloc 00:04:33.377 ************************************ 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.377 05:18:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:34.310 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:34.310 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:34.310 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:34.310 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:34.310 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:34.310 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 
00:04:34.310 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:34.310 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:34.310 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:34.310 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:34.310 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:34.310 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:34.310 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:34.310 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:34.310 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:34.310 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:34.310 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43238088 kB' 'MemAvailable: 46748080 kB' 'Buffers: 2704 kB' 'Cached: 12826796 kB' 'SwapCached: 0 kB' 'Active: 9858372 kB' 'Inactive: 3506552 kB' 'Active(anon): 9464020 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538752 kB' 'Mapped: 181948 kB' 'Shmem: 8928596 kB' 'KReclaimable: 205424 kB' 'Slab: 583112 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377688 kB' 
'KernelStack: 12880 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10629540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 
[trace: the AnonHugePages lookup continues past Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce and WritebackTmp, one `continue` per non-matching key; it resumes after the sketch below with CommitLimit onwards and the AnonHugePages match (anon=0).]
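This lookup feeds the anon count used by the verification step: hugepages.sh only counts AnonHugePages when transparent hugepages are not pinned to "never" (the `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` check earlier in this trace). A self-contained sketch of that accounting, assuming the usual sysfs path for the THP setting; the exact bookkeeping inside setup/hugepages.sh may differ:

#!/usr/bin/env bash
# Illustrative anon/surp/resv accounting around this trace. AnonHugePages is
# only counted when transparent hugepages are not set to "never".
meminfo() { awk -v k="$1:" '$1 == k { print $2; exit }' /proc/meminfo; }

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
[[ $thp != *"[never]"* ]] && anon=$(meminfo AnonHugePages)
surp=$(meminfo HugePages_Surp)
resv=$(meminfo HugePages_Rsvd)
echo "anon=${anon:-0} surp=${surp:-0} resv=${resv:-0}"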
00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.573 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.574 05:18:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43238560 kB' 'MemAvailable: 46748552 kB' 'Buffers: 2704 kB' 'Cached: 12826800 kB' 'SwapCached: 0 kB' 'Active: 9857960 kB' 'Inactive: 3506552 kB' 'Active(anon): 9463608 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538292 kB' 'Mapped: 181872 kB' 'Shmem: 8928600 kB' 'KReclaimable: 205424 kB' 'Slab: 583112 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377688 kB' 'KernelStack: 12880 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10629556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.574 05:18:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _
[trace: the HugePages_Surp lookup skips Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped and CmaTotal, one `continue` per non-matching key; it resumes after the sketch below with CmaFree onwards and the HugePages_Surp match (surp=0).]
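These HugePages_Surp / HugePages_Rsvd reads are part of verify_nr_hugepages checking the HUGE_EVEN_ALLOC=yes run configured earlier (NRHUGE=1024, split as node0=512 / node1=512). A rough sketch of how such an even split can be requested through the kernel's per-node sysfs knobs; the real scripts/setup.sh does considerably more (device binding, 1G pages, memory limits), so treat this as an illustration of the mechanism only:

#!/usr/bin/env bash
# Even 2 MB hugepage split across NUMA nodes, the behaviour the
# HUGE_EVEN_ALLOC=yes / NRHUGE=1024 run above verifies (512 per node).
# Writing these files requires root.
NRHUGE=${NRHUGE:-1024}
nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$(( NRHUGE / ${#nodes[@]} ))
for node in "${nodes[@]}"; do
    echo "$per_node" > "$node/hugepages/hugepages-2048kB/nr_hugepages"
done
grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages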
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.576 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43238764 kB' 'MemAvailable: 46748756 kB' 'Buffers: 2704 kB' 'Cached: 12826804 kB' 'SwapCached: 0 kB' 'Active: 9857696 kB' 'Inactive: 3506552 kB' 'Active(anon): 9463344 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538020 kB' 'Mapped: 181872 kB' 'Shmem: 8928604 kB' 'KReclaimable: 205424 kB' 'Slab: 583112 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377688 kB' 'KernelStack: 12896 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10629580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
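The repeated "IFS=': '" / "read -r var val _" / "[[ <field> == ... ]]" / "continue" entries above and below are the xtrace of setup/common.sh's get_meminfo helper scanning a captured meminfo snapshot for a single field (HugePages_Rsvd in this pass). A minimal sketch of that loop, reconstructed from the trace rather than taken from the script itself (the function signature and argument handling are assumptions):

    # Sketch only -- reconstructed from the xtrace, not the verbatim setup/common.sh.
    shopt -s extglob                        # needed for the "Node +([0-9]) " prefix strip
    get_meminfo() {
        local get=$1 node=$2 var val mem_f mem line
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # per-node file when a node is given
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix on per-node entries
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip every field except the requested one
            echo "$val"                        # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Total (system-wide) or get_meminfo HugePages_Surp 0 (node 0), which are the two shapes of call this trace shows; every non-matching field produces one [[ ... ]]/continue pair, which is why the scan dominates the log.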
00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.577 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.578 
05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.578 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:34.579 nr_hugepages=1024 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:34.579 resv_hugepages=0 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:34.579 surplus_hugepages=0 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:34.579 anon_hugepages=0 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.579 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43238008 
kB' 'MemAvailable: 46748000 kB' 'Buffers: 2704 kB' 'Cached: 12826836 kB' 'SwapCached: 0 kB' 'Active: 9857820 kB' 'Inactive: 3506552 kB' 'Active(anon): 9463468 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538064 kB' 'Mapped: 181872 kB' 'Shmem: 8928636 kB' 'KReclaimable: 205424 kB' 'Slab: 583112 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377688 kB' 'KernelStack: 12880 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10629600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.580 05:18:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.580 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
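For reference, the snapshot printed above ties the test name to the numbers: HugePages_Total: 1024 at Hugepagesize: 2048 kB gives Hugetlb: 1024 x 2048 kB = 2,097,152 kB (2 GiB), which even_2G_alloc then expects to find split evenly, 512 pages per NUMA node, in the per-node reads further down.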
00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.581 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.582 05:18:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
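The setup/hugepages.sh entries interleaved with this scan (resv=0 and the nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 echoes earlier, plus the get_nodes enumeration and per-node HugePages_Surp reads that follow) are the even_2G_alloc verification itself. A minimal sketch of that accounting, reusing the get_meminfo sketch above; variable names not visible in the trace, the exact order of checks, and how get_nodes obtains each node's count (the trace only shows the resulting 512/512) are assumptions:

    # Sketch only -- the real checks live in setup/hugepages.sh (@100-@117 in the trace).
    shopt -s extglob
    nr_hugepages=1024
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in the trace above
    surp=$(get_meminfo HugePages_Surp)      # 0 in the trace above

    # The total the kernel reports must equal requested + surplus + reserved pages.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

    # Enumerate NUMA nodes; the test expects an even split of the 1024 pages.
    nodes=(/sys/devices/system/node/node+([0-9]))
    no_nodes=${#nodes[@]}                   # 2 on this machine
    for path in "${nodes[@]}"; do
        node=${path##*node}
        nodes_sys[node]=$(get_meminfo HugePages_Total "$node")
        nodes_test[node]=$((nr_hugepages / no_nodes))      # 512 per node here
    done

    # Per node: expected share plus reserved/surplus must match what sysfs reports.
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv + $(get_meminfo HugePages_Surp "$node") ))
        (( nodes_sys[node] == nodes_test[node] )) || exit 1   # 512 == 512 on each node
    done

In this run both node0 and node1 come back with HugePages_Total: 512 and HugePages_Surp: 0 in the per-node meminfo reads below, so the even split holds.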
00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26714592 kB' 'MemUsed: 6115292 kB' 'SwapCached: 0 kB' 'Active: 3815280 kB' 'Inactive: 108696 kB' 'Active(anon): 3704392 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3661676 kB' 'Mapped: 61412 kB' 'AnonPages: 265420 kB' 'Shmem: 3442092 kB' 'KernelStack: 8168 kB' 'PageTables: 5308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94196 kB' 'Slab: 319260 kB' 'SReclaimable: 94196 kB' 'SUnreclaim: 225064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.582 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.583 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.584 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16525476 kB' 'MemUsed: 11186348 kB' 'SwapCached: 0 kB' 'Active: 6042764 kB' 'Inactive: 3397856 kB' 'Active(anon): 5759300 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9167908 kB' 'Mapped: 120460 kB' 'AnonPages: 272824 kB' 'Shmem: 5486588 kB' 'KernelStack: 4712 kB' 'PageTables: 3020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111228 kB' 'Slab: 263852 kB' 'SReclaimable: 111228 kB' 'SUnreclaim: 152624 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0'
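The per-node dump above is the raw material for the lookup that follows: get_meminfo walks it line by line until it reaches the requested key, and the scan of these very values resumes below. Pieced together from the commands traced here (the local names come from the trace; the redirects and the loop shape are inferred), this is a sketch of what setup/common.sh's helper appears to do, not a verbatim copy:

shopt -s extglob   # the "Node +([0-9]) " strip below relies on extended globs

# get_meminfo KEY [NODE] - print KEY's value from /proc/meminfo, or from the
# per-node meminfo file when a NUMA node number is given
get_meminfo() {
    local get=$1
    local node=${2:-}
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # per-node lookups read that node's own meminfo instead of the global file
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # per-node files prefix every line with "Node N "; strip it so the key
    # names match the /proc/meminfo spelling
    mem=("${mem[@]#Node +([0-9]) }")

    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"        # value only, without the trailing kB unit
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp 1   # on this host: 0, matching the trace below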
[ setup/common.sh@31-32: the same cycle now scans node1's meminfo, checking MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped and Unaccepted against HugePages_Surp without a match ]
00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:34.844 node0=512 expecting 512 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:34.844 node1=512 expecting 512 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:34.844 00:04:34.844 real 0m1.423s 00:04:34.844 user 0m0.620s 00:04:34.844 sys 0m0.766s 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:34.844 05:18:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:34.844 ************************************ 00:04:34.844 END TEST even_2G_alloc 00:04:34.844 ************************************ 00:04:34.844 05:18:41 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:34.844 05:18:41 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:34.844 05:18:41 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.844 05:18:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:34.844 ************************************ 00:04:34.844 START TEST odd_alloc 00:04:34.844 ************************************ 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:34.844 05:18:41 
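even_2G_alloc finishes with both nodes reporting the expected 512 pages, and odd_alloc opens by asking get_test_nr_hugepages for 2098176 kB. The values that appear next in the trace (nr_hugepages=1025, nodes_test[1]=512, nodes_test[0]=513, HUGEMEM=2049) all fall out of the arithmetic below; the helper names are the real ones from the trace, but the bodies are an assumed equivalent rather than a copy of setup/hugepages.sh. The trace picks up again below with setup/hugepages.sh@50 deriving exactly these numbers.

default_hugepages=2048   # kB, matching 'Hugepagesize: 2048 kB' in the dumps above

get_test_nr_hugepages() {
    local size=$1                                   # requested kB, here 2098176
    # 2098176 / 2048 = 1024.5, rounded up to the deliberately odd count 1025
    nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
}

get_test_nr_hugepages_per_node() {
    local _nr_hugepages=$nr_hugepages
    local _no_nodes=2
    local -a nodes_test
    # hand each node its share, working from the last node down; the first node
    # absorbs the odd remainder, hence nodes_test[1]=512 and nodes_test[0]=513
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        _nr_hugepages=$(( _nr_hugepages - nodes_test[_no_nodes - 1] ))
        _no_nodes=$(( _no_nodes - 1 ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # -> node0=513 node1=512
}

get_test_nr_hugepages 2098176
get_test_nr_hugepages_per_node

HUGEMEM=2049 is the same request expressed in megabytes (2049 x 1024 kB = 2098176 kB), and HUGE_EVEN_ALLOC=yes has scripts/setup.sh spread the allocation across the NUMA nodes; verify_nr_hugepages then repeats the per-node accounting and the "nodeN=... expecting ..." comparison seen above, this time against the 513/512 split.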
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.844 05:18:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:35.779 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:35.779 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:35.779 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:35.779 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:35.780 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:35.780 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:35.780 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:35.780 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:35.780 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:35.780 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:35.780 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:35.780 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:35.780 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:35.780 0000:80:04.3 
(8086 0e23): Already using the vfio-pci driver 00:04:35.780 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:35.780 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:35.780 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:36.043 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:36.043 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:36.043 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:36.043 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:36.043 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:36.043 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:36.043 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:36.043 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:36.043 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:36.043 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:36.043 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:36.043 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:36.043 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.043 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.043 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.044 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.044 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.044 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.044 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43210180 kB' 'MemAvailable: 46720172 kB' 'Buffers: 2704 kB' 'Cached: 12826920 kB' 'SwapCached: 0 kB' 'Active: 9855352 kB' 'Inactive: 3506552 kB' 'Active(anon): 9461000 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536020 kB' 'Mapped: 179920 kB' 'Shmem: 8928720 kB' 'KReclaimable: 205424 kB' 'Slab: 583184 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377760 kB' 'KernelStack: 13056 kB' 'PageTables: 9584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10582316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196788 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1924700 kB' 
'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[ setup/common.sh@31-32: the IFS=': ' / read -r var val _ / [[ $var == AnonHugePages ]] / continue cycle scans the system-wide meminfo dumped above, from MemTotal through VmallocTotal, without a match ]
00:04:36.045 05:18:43
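The key this scan is after is AnonHugePages, and it resumes directly below until that key turns up. The lookup only happens because verify_nr_hugepages first checked, at setup/hugepages.sh@96 in the trace, that transparent hugepages are not switched off (the [[ always [madvise] never != *[never]* ]] test). A minimal standalone sketch of that guard, with the sysfs path assumed (the trace only shows the value being tested, not where it was read from):

# only count anonymous THP usage when THP is not set to [never]
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    # equivalent of the get_meminfo AnonHugePages call sketched earlier; 0 kB on this host
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "AnonHugePages: ${anon} kB"

The anon=0 assignment a few entries further on is this value being captured, after which the script moves on to the system-wide HugePages_Surp lookup that closes out this stretch of the trace.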
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43209396 kB' 'MemAvailable: 46719388 kB' 'Buffers: 2704 kB' 'Cached: 12826924 kB' 'SwapCached: 0 kB' 'Active: 9855032 kB' 'Inactive: 3506552 kB' 'Active(anon): 9460680 kB' 'Inactive(anon): 0 
kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535132 kB' 'Mapped: 179988 kB' 'Shmem: 8928724 kB' 'KReclaimable: 205424 kB' 'Slab: 583184 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377760 kB' 'KernelStack: 13104 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10583452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196788 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.045 
05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.045 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 
05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.046 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43207844 kB' 'MemAvailable: 46717836 kB' 'Buffers: 2704 kB' 'Cached: 12826944 kB' 'SwapCached: 0 kB' 'Active: 9855220 kB' 'Inactive: 3506552 kB' 'Active(anon): 9460868 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535268 kB' 'Mapped: 179988 kB' 'Shmem: 8928744 kB' 'KReclaimable: 205424 kB' 'Slab: 583172 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377748 kB' 'KernelStack: 13040 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10581352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.047 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 
05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.048 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:36.049 nr_hugepages=1025 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:36.049 resv_hugepages=0 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:36.049 surplus_hugepages=0 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:36.049 anon_hugepages=0 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.049 05:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43207884 kB' 'MemAvailable: 46717876 kB' 'Buffers: 2704 kB' 'Cached: 12826964 kB' 'SwapCached: 0 kB' 'Active: 9853804 kB' 'Inactive: 3506552 kB' 'Active(anon): 9459452 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533920 kB' 'Mapped: 179940 kB' 'Shmem: 8928764 kB' 'KReclaimable: 205424 kB' 'Slab: 583112 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377688 kB' 'KernelStack: 12800 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10581376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.049 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:36.050 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26705856 kB' 'MemUsed: 6124028 kB' 'SwapCached: 0 kB' 'Active: 3812356 kB' 'Inactive: 108696 kB' 'Active(anon): 3701468 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3661676 kB' 'Mapped: 60488 kB' 'AnonPages: 262532 kB' 'Shmem: 3442092 kB' 'KernelStack: 8120 kB' 'PageTables: 4920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94196 kB' 'Slab: 319052 kB' 'SReclaimable: 94196 kB' 'SUnreclaim: 224856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.315 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16502716 kB' 'MemUsed: 11209108 kB' 'SwapCached: 0 kB' 'Active: 6041480 kB' 'Inactive: 3397856 kB' 'Active(anon): 5758016 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9168032 kB' 'Mapped: 119452 kB' 'AnonPages: 271380 kB' 'Shmem: 5486712 kB' 'KernelStack: 4680 kB' 'PageTables: 2996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111228 kB' 'Slab: 264060 kB' 'SReclaimable: 111228 kB' 'SUnreclaim: 152832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.316 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
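The trace above is get_meminfo() doing a field-by-field scan of a per-node meminfo file: it loads the file, strips the "Node <N> " prefix from each line, and walks key/value pairs until the requested key (HugePages_Surp here) matches, then echoes its value. A minimal sketch of that helper, condensed from what the common.sh@16-33 trace implies rather than copied from the repository (the trace uses mapfile plus a read loop; the sketch streams the file directly, and the extglob prefix strip and /proc fallback are assumptions):

    shopt -s extglob
    # get_meminfo <field> [node] - print <field> from /proc/meminfo, or from the
    # per-node meminfo file when a node index is given.
    get_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node +([0-9]) }        # per-node files prefix every line with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

Both per-node surplus reads above return 0, so nothing is added to nodes_test[] before the 512/513 comparison that follows.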
# echo 'node0=512 expecting 513' 00:04:36.317 node0=512 expecting 513 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:36.317 node1=513 expecting 512 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:36.317 00:04:36.317 real 0m1.459s 00:04:36.317 user 0m0.623s 00:04:36.317 sys 0m0.797s 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:36.317 05:18:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:36.317 ************************************ 00:04:36.317 END TEST odd_alloc 00:04:36.317 ************************************ 00:04:36.317 05:18:43 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:36.317 05:18:43 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:36.317 05:18:43 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:36.317 05:18:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:36.317 ************************************ 00:04:36.317 START TEST custom_alloc 00:04:36.317 ************************************ 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- 
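The odd_alloc result just logged ("node0=512 expecting 513" / "node1=513 expecting 512", then [[ 512 513 == 512 513 ]]) hinges on an order-insensitive comparison: the test does not care which node received the extra page of the odd 1025-page request, only that the multiset of per-node counts matches. A reduced sketch of that trick (assumption: boiled down from the sorted_t/sorted_s bookkeeping at hugepages.sh@126-130, using indexed-array indices as a sorted set):

    # Values per the log above; which node carries the odd extra page differs
    # between the two arrays, and that is exactly what the sorted compare tolerates.
    nodes_test=([0]=513 [1]=512)
    nodes_sys=([0]=512 [1]=513)
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1     # the count itself becomes an array index
        sorted_s[nodes_sys[node]]=1
    done
    # "${!sorted_s[*]}" lists indices in ascending order, so 512/513 and 513/512
    # both expand to "512 513" and the comparison passes either way.
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo "odd allocation spread as expected"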
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
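custom_alloc first converts the requested size into a page count and, with no explicit per-node request yet, spreads it evenly across the two nodes (256 + 256), as the @49-@84 trace shows. A minimal sketch of that step (assumption: condensed, not the verbatim helpers; the size argument is taken to be in kB and the default hugepage size to be 2048 kB, which matches the 512-page result in the log):

    default_hugepages=2048                            # kB per huge page
    nr_hugepages=$(( 1048576 / default_hugepages ))   # 1 GiB worth of pages -> 512
    _no_nodes=2
    nodes_test=()
    while (( _no_nodes > 0 )); do
        nodes_test[--_no_nodes]=$(( nr_hugepages / 2 ))   # even split: 256 per node
    done
    echo "${nodes_test[@]}"                           # prints: 256 256

Once nodes_hp[0]=512 and nodes_hp[1]=1024 are set, the later passes copy those explicit per-node targets into nodes_test instead of splitting evenly, which is where the 512/1024 layout below comes from.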
-- # for node in "${!nodes_hp[@]}" 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:36.317 05:18:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:36.318 05:18:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.318 05:18:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:37.250 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:37.250 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:37.250 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:37.250 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:37.250 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:37.250 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:37.250 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:37.250 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:37.250 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:37.250 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:37.250 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:37.250 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:37.250 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:37.250 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:37.250 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:37.251 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:37.251 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- 
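The HUGENODE string assembled at @187 ('nodes_hp[0]=512,nodes_hp[1]=1024') is how the per-node request is handed to SPDK's setup script, which is then run in "output" mode at the path shown above. A hedged usage sketch (assumption: setup.sh picks HUGENODE up from the environment, as the harness here implies):

    # Request 512 huge pages on node 0 and 1024 on node 1, then let setup.sh
    # apply them; the path is the workspace copy printed in the log above.
    HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh

The "(8086 ...): Already using the vfio-pci driver" lines that follow are setup.sh reporting that the PCI devices it manages were bound to vfio-pci by an earlier run, so this pass mainly adjusts the hugepage counts.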
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42169236 kB' 'MemAvailable: 45679228 kB' 'Buffers: 2704 kB' 'Cached: 12827048 kB' 'SwapCached: 0 kB' 'Active: 9855852 kB' 'Inactive: 3506552 kB' 'Active(anon): 9461500 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535544 kB' 'Mapped: 180008 kB' 'Shmem: 8928848 kB' 'KReclaimable: 205424 kB' 'Slab: 582972 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377548 kB' 'KernelStack: 12896 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10581080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:37.516 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:37.517 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- 
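verify_nr_hugepages begins with a transparent-hugepage guard: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at @96 checks that THP is not globally disabled before reading AnonHugePages, which comes back 0 kB and leaves anon=0. A condensed sketch of that guard (assumption: the value read is the standard sysfs THP control file):

    # "always [madvise] never" means THP is in madvise mode, i.e. not [never],
    # so anonymous huge pages could exist and must be accounted for.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)    # 0 kB in the snapshot above
    fi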
setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42175340 kB' 'MemAvailable: 45685332 kB' 'Buffers: 2704 kB' 'Cached: 12827048 kB' 'SwapCached: 0 kB' 'Active: 9854424 kB' 'Inactive: 3506552 kB' 'Active(anon): 9460072 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534448 kB' 'Mapped: 180036 kB' 'Shmem: 8928848 kB' 'KReclaimable: 205424 kB' 'Slab: 582996 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377572 kB' 'KernelStack: 12800 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10581228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.518 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
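The repeated continue lines above are the field-matching loop of the test's get_meminfo helper: every /proc/meminfo entry is split on ': ' and compared against the requested key (here HugePages_Surp) until it matches, at which point the value is echoed and the helper returns. Below is a minimal sketch of that pattern, reconstructed from the xtrace rather than copied from setup/common.sh; the function name get_meminfo_sketch is illustrative only.

#!/usr/bin/env bash
# Minimal reconstruction of the parsing pattern shown in the trace above:
# read /proc/meminfo line by line, split each entry on ': ', skip ('continue')
# every key that is not the requested one, then print its value and return.
get_meminfo_sketch() {
    local get=$1                 # e.g. HugePages_Surp, HugePages_Rsvd, AnonHugePages
    local mem_f=/proc/meminfo
    local line var val _
    local -a mem
    mapfile -t mem < "$mem_f"
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # corresponds to the long runs of 'continue' above
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Surp    # prints 0 on the node traced here

On the machine in this log all three lookups (AnonHugePages, HugePages_Surp, HugePages_Rsvd) come back 0, which is why each scan ends in 'echo 0' / 'return 0'.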
00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42175296 kB' 'MemAvailable: 45685288 kB' 'Buffers: 2704 kB' 'Cached: 12827072 kB' 'SwapCached: 0 kB' 'Active: 9853728 kB' 'Inactive: 3506552 kB' 'Active(anon): 9459376 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533744 kB' 'Mapped: 179956 kB' 'Shmem: 8928872 kB' 'KReclaimable: 205424 kB' 'Slab: 582928 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377504 kB' 'KernelStack: 12784 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10581256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.519 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.520 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@100 -- # resv=0 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:37.521 nr_hugepages=1536 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:37.521 resv_hugepages=0 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:37.521 surplus_hugepages=0 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:37.521 anon_hugepages=0 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42174316 kB' 'MemAvailable: 45684308 kB' 'Buffers: 2704 kB' 'Cached: 12827100 kB' 'SwapCached: 0 kB' 'Active: 9854004 kB' 'Inactive: 3506552 kB' 'Active(anon): 9459652 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534072 kB' 'Mapped: 179956 kB' 'Shmem: 8928900 kB' 'KReclaimable: 205424 kB' 'Slab: 582928 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377504 kB' 'KernelStack: 12816 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10581644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
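With anon=0, surp=0 and resv=0 collected, the trace echoes nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 and then runs the arithmetic checks (( 1536 == nr_hugepages + surp + resv )) and (( 1536 == nr_hugepages )) before re-reading HugePages_Total for the per-node split. Below is a standalone sketch of that consistency check, assuming the same /proc/meminfo field names; the variable names mirror the xtrace, not the verbatim hugepages.sh source.

#!/usr/bin/env bash
# Sketch of the pool-consistency check performed in the trace: the number of
# hugepages requested by the test (1536 here) must be covered by the pages the
# kernel reports as allocated, surplus and reserved.
requested=1536

anon=$(awk '/^AnonHugePages:/   {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
nr_hugepages=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# Same arithmetic tests as in the trace above.
if (( requested == nr_hugepages + surp + resv )) && (( requested == nr_hugepages )); then
    echo "hugepage pool consistent"
fi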
00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.521 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
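The 'local node=' and '[[ -e /sys/devices/system/node/node/meminfo ]]' steps in each call show that the same helper can also be pointed at a per-NUMA-node meminfo file; with the node argument empty the path test degenerates to the string seen above and the helper falls back to /proc/meminfo (the trace also strips the 'Node N ' prefix via ${mem[@]#Node +([0-9]) } for the per-node case). A small sketch of that selection step, with the path pattern inferred from the trace rather than taken from the script source:

#!/usr/bin/env bash
# Node-selection step as inferred from the trace: prefer the per-node meminfo
# when a NUMA node id is supplied and the sysfs file exists, otherwise use the
# global /proc/meminfo.
node=${1-}                         # e.g. 0 for NUMA node 0; empty in the traced calls
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
# Per-node entries carry a 'Node <n> ' prefix, e.g. 'Node 0 HugePages_Total: ...'.
grep HugePages_Total "$mem_f"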
[setup/common.sh@31-32 field scan continues over the remaining /proc/meminfo lines, Inactive(anon) through Unaccepted; none match HugePages_Total]
00:04:37.522 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:37.523 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26689572 kB' 'MemUsed: 6140312 kB' 'SwapCached: 0 kB' 'Active: 3812376 kB' 'Inactive: 108696 kB' 'Active(anon): 3701488 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3661700 kB' 'Mapped: 60504 kB' 'AnonPages: 262500 kB' 'Shmem: 3442116 kB' 'KernelStack: 8120 kB' 'PageTables: 4920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94196 kB' 'Slab: 318960 kB' 'SReclaimable: 94196 kB' 'SUnreclaim: 224764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@31-32 field scan of the node0 meminfo lines, MemTotal through HugePages_Free; none match HugePages_Surp]
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
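Each get_meminfo call in this trace resolves a single field: with a node argument it reads /sys/devices/system/node/nodeN/meminfo, stripping the leading "Node N" from every line so the format matches /proc/meminfo, otherwise it falls back to /proc/meminfo, then walks the lines with IFS=': ' until the requested key matches and echoes its value. A self-contained sketch of that lookup follows; the function name is made up for illustration and this is not the setup/common.sh implementation.

# Illustrative get_meminfo-style lookup (hypothetical name, not setup/common.sh):
shopt -s extglob                                      # needed for the "Node N " prefix strip below
meminfo_field() {
    local get=$1 node=$2 line var val _ mem_f mem
    mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")                  # per-node files prefix each line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# e.g. meminfo_field HugePages_Surp 0   -> 0 on this host
#      meminfo_field HugePages_Total    -> 1536 system-wide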
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:37.524 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15484248 kB' 'MemUsed: 12227576 kB' 'SwapCached: 0 kB' 'Active: 6041988 kB' 'Inactive: 3397856 kB' 'Active(anon): 5758524 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9168144 kB' 'Mapped: 119452 kB' 'AnonPages: 271884 kB' 'Shmem: 5486824 kB' 'KernelStack: 4712 kB' 'PageTables: 3040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111228 kB' 'Slab: 263968 kB' 'SReclaimable: 111228 kB' 'SUnreclaim: 152740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32 field scan of the node1 meminfo lines, MemTotal through HugePages_Free; none match HugePages_Surp]
00:04:37.526 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:37.526 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:37.526 05:18:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:37.526 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:37.526 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:37.526 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:37.526 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:37.526 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:37.526 node0=512 expecting 512
00:04:37.526 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:37.526 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:37.526 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:37.526 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:37.526 node1=1024 expecting 1024
00:04:37.526 05:18:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:37.526
00:04:37.526 real 0m1.368s
00:04:37.526 user 0m0.569s
00:04:37.526 sys 0m0.758s
00:04:37.526 05:18:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:37.526 05:18:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:37.526 ************************************
00:04:37.526 END TEST custom_alloc
00:04:37.526 ************************************
00:04:37.785 05:18:44 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:37.785 05:18:44 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:37.785 05:18:44 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:37.785 05:18:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:37.785 ************************************
00:04:37.785 START TEST no_shrink_alloc
00:04:37.785 ************************************
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:37.785 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
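get_test_nr_hugepages above converts the requested allocation size into a page count: 2097152 kB at the 2048 kB hugepage size reported in /proc/meminfo later in this trace works out to 1024 pages, and because a single node id ('0') was passed, get_test_nr_hugepages_per_node places the whole allocation on node 0. The sketch below shows that arithmetic under the assumption that the default hugepage size is the Hugepagesize value from /proc/meminfo; the variable names are illustrative, not the hugepages.sh internals.

# Illustrative arithmetic only (assumes Hugepagesize is the default hugepage size):
size_kb=2097152                                                  # size passed to the test helper
node_ids=(0)                                                     # node list passed after the size
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this host
nr_hugepages=$(( size_kb / hugepage_kb ))                        # 2097152 / 2048 = 1024 pages
declare -a nodes_test
for id in "${node_ids[@]}"; do
    nodes_test[id]=$nr_hugepages                                 # all 1024 pages land on node 0
done
echo "nr_hugepages=$nr_hugepages on node(s) ${node_ids[*]}"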
00:04:37.786 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:37.786 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:37.786 05:18:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:38.723 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:38.723 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:38.723 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:38.723 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:38.723 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:38.723 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:38.723 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:38.723 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:38.723 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:38.723 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:38.723 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:38.723 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:38.723 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:38.723 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:38.723 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:38.723 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:38.723 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:38.987 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:38.987 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:38.987 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:38.987 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43217284 kB' 'MemAvailable: 46727276 kB' 'Buffers: 2704 kB' 'Cached: 12827180 kB' 'SwapCached: 0 kB' 'Active: 9854780 kB' 'Inactive: 3506552 kB' 'Active(anon): 9460428 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534696 kB' 'Mapped: 180044 kB' 'Shmem: 8928980 kB' 'KReclaimable: 205424 kB' 'Slab: 582880 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377456 kB' 'KernelStack: 12848 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10581704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[setup/common.sh@31-32 field scan of /proc/meminfo, MemTotal through SecPageTables; no match yet for AnonHugePages]
00:04:38.988 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43217524 kB' 'MemAvailable: 46727516 kB' 'Buffers: 2704 kB' 'Cached: 12827184 kB' 'SwapCached: 0 kB' 'Active: 9854780 kB' 'Inactive: 3506552 kB' 'Active(anon): 9460428 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534692 kB' 'Mapped: 180044 kB' 'Shmem: 8928984 kB' 'KReclaimable: 205424 kB' 'Slab: 582864 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377440 kB' 'KernelStack: 12864 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10581724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 
'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.989 
05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.989 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 
05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.990 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43217876 kB' 'MemAvailable: 46727868 kB' 'Buffers: 2704 kB' 'Cached: 12827200 kB' 'SwapCached: 0 kB' 'Active: 9854356 kB' 'Inactive: 3506552 kB' 'Active(anon): 9460004 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534184 kB' 'Mapped: 179968 kB' 'Shmem: 8929000 kB' 'KReclaimable: 205424 kB' 'Slab: 582864 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377440 kB' 'KernelStack: 12832 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10581744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.991 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.992 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=1024 00:04:38.993 nr_hugepages=1024 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:38.993 resv_hugepages=0 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:38.993 surplus_hugepages=0 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:38.993 anon_hugepages=0 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43217120 kB' 'MemAvailable: 46727112 kB' 'Buffers: 2704 kB' 'Cached: 12827224 kB' 'SwapCached: 0 kB' 'Active: 9854312 kB' 'Inactive: 3506552 kB' 'Active(anon): 9459960 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534184 kB' 'Mapped: 179968 kB' 'Shmem: 8929024 kB' 'KReclaimable: 205424 kB' 'Slab: 582864 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377440 kB' 'KernelStack: 12832 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10581768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.993 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.994 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:38.995 
05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25639632 kB' 'MemUsed: 7190252 kB' 'SwapCached: 0 kB' 'Active: 3812768 kB' 'Inactive: 108696 kB' 'Active(anon): 3701880 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3661760 kB' 'Mapped: 60516 kB' 'AnonPages: 262876 kB' 'Shmem: 3442176 kB' 'KernelStack: 8152 kB' 'PageTables: 4916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94196 kB' 'Slab: 318908 kB' 'SReclaimable: 94196 kB' 'SUnreclaim: 224712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.995 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.996 05:18:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:38.996 node0=1024 expecting 1024 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.996 05:18:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:40.379 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:40.379 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:40.379 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:40.379 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:40.379 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:40.379 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:40.379 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:40.379 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:40.379 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:40.379 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:40.379 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:40.379 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:40.379 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:40.379 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:40.379 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:40.379 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:40.379 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:40.379 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:40.379 05:18:47 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43212048 kB' 'MemAvailable: 46722040 kB' 'Buffers: 2704 kB' 'Cached: 12827292 kB' 'SwapCached: 0 kB' 'Active: 9855064 kB' 'Inactive: 3506552 kB' 'Active(anon): 9460712 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534812 kB' 'Mapped: 180100 kB' 'Shmem: 8929092 kB' 'KReclaimable: 205424 kB' 'Slab: 582936 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377512 kB' 'KernelStack: 12832 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10582148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.379 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
[get_meminfo keeps scanning the /proc/meminfo keys from Writeback through HardwareCorrupted; none of them match AnonHugePages, so each iteration resets IFS, reads the next line and continues]
00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.380 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43212180 kB' 'MemAvailable: 46722172 kB' 'Buffers: 2704 kB' 'Cached: 12827296 kB' 'SwapCached: 0 kB' 'Active: 9854960 kB' 'Inactive: 3506552 kB' 'Active(anon): 9460608 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534716 kB' 'Mapped: 180052 kB' 'Shmem: 8929096 kB' 'KReclaimable: 205424 kB' 'Slab: 582916 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377492 kB' 'KernelStack: 12848 kB' 'PageTables: 7932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10582168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
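In plain terms, the lookup being traced here reads the chosen meminfo file, strips any per-node prefix, then splits each line on ': ' and prints the value of the requested key. A minimal bash sketch of that pattern, reconstructed from the traced commands (the get/node/mem_f/mem names come from the trace; the body is an approximation, not the verbatim setup/common.sh source):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used when stripping "Node <n> " prefixes

# Approximate reconstruction of the get_meminfo helper traced above.
get_meminfo() {
    local get=$1        # key to look up, e.g. AnonHugePages or HugePages_Surp
    local node=${2:-}   # optional NUMA node number; empty means system-wide /proc/meminfo
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node <n> "

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"                    # value only, without the trailing kB unit
        return 0
    done
    return 1
}

# Usage matching the calls in this run:
#   get_meminfo AnonHugePages    -> 0
#   get_meminfo HugePages_Surp   -> 0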
[the HugePages_Surp lookup then walks the snapshot above key by key, from MemTotal down through HugePages_Rsvd, skipping every non-matching key with continue]
00:04:40.382 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.382 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:40.382 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:40.382 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:40.382 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:40.382 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:40.382 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:40.382 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:40.382 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:40.382 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.382 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.382 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.382 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.382 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.382 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.382 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.382 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43211928 kB' 'MemAvailable: 46721920 kB' 'Buffers: 2704 kB' 'Cached: 12827296 kB' 'SwapCached: 0 kB' 'Active: 9854548 kB' 'Inactive: 3506552 kB' 'Active(anon): 9460196 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534308 kB' 'Mapped: 179976 kB' 'Shmem: 8929096 kB' 'KReclaimable: 205424 kB' 'Slab: 582912 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377488 kB' 'KernelStack: 12864 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10582188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196740 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
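The node checks above ([[ -e /sys/devices/system/node/node/meminfo ]], [[ -n '' ]]) are there because a per-node lookup reads the node's own meminfo file, whose lines carry a "Node <n> " prefix that must be stripped before key matching. A short illustration of that case, assuming a node0 path purely for the example:

#!/usr/bin/env bash
# Per-node meminfo lines are prefixed with "Node <n> "; the extglob expansion below is the
# same one visible in the trace (mem=("${mem[@]#Node +([0-9]) }")).
shopt -s extglob

node=0   # hypothetical node number for illustration
mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
mem=("${mem[@]#Node +([0-9]) }")        # drop the "Node <n> " prefix
printf '%s\n' "${mem[@]}" | grep -E '^HugePages_(Total|Free):'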
[the HugePages_Rsvd lookup scans the same keys again, from MemTotal down through HugePages_Free, until it reaches HugePages_Rsvd]
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:40.383 nr_hugepages=1024
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:40.383 resv_hugepages=0
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:40.383 surplus_hugepages=0
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:40.383 anon_hugepages=0
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.383 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43211928 kB' 'MemAvailable: 46721920 kB' 'Buffers: 2704 kB' 'Cached: 12827336 kB' 'SwapCached: 0 kB' 'Active: 9854864 kB' 'Inactive: 3506552 kB' 'Active(anon): 9460512 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534592 kB' 'Mapped: 179976 kB' 'Shmem: 8929136 kB' 'KReclaimable: 205424 kB' 'Slab: 582912 kB' 'SReclaimable: 205424 kB' 'SUnreclaim: 377488 kB' 'KernelStack: 12864 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10582212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196740 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
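The assertions at setup/hugepages.sh@107 and @109 boil down to arithmetic over the counters just read: with surplus and reserved pages at zero, the pool must still account for all 1024 requested pages, i.e. it has not shrunk. A self-contained sketch of that check (the meminfo helper and NRHUGE are illustrative stand-ins, not the test's own code):

#!/usr/bin/env bash
# Hypothetical helper standing in for get_meminfo: print the numeric value of a meminfo key.
meminfo() { awk -v k="$1" '$1 == k":" {print $2}' /proc/meminfo; }

NRHUGE=1024                                   # hugepage count requested by this test run
anon=$(meminfo AnonHugePages)                 # 0 in this run
surp=$(meminfo HugePages_Surp)                # 0
resv=$(meminfo HugePages_Rsvd)                # 0
nr_hugepages=$(meminfo HugePages_Total)       # 1024

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# Mirrors setup/hugepages.sh@107: the requested count must equal pool + surplus + reserved...
(( NRHUGE == nr_hugepages + surp + resv )) || { echo "hugepage accounting mismatch" >&2; exit 1; }
# ...and setup/hugepages.sh@109: the pool itself must not have shrunk below the request.
(( NRHUGE == nr_hugepages )) || { echo "hugepage pool shrank" >&2; exit 1; }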
[the HugePages_Total lookup then starts walking the snapshot above in the same way, comparing MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable and Mlocked against HugePages_Total and continuing past each of them]
00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var
val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.384 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25641792 kB' 'MemUsed: 7188092 kB' 'SwapCached: 0 kB' 'Active: 3813536 kB' 'Inactive: 108696 kB' 'Active(anon): 3702648 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3661868 kB' 
'Mapped: 60524 kB' 'AnonPages: 263552 kB' 'Shmem: 3442284 kB' 'KernelStack: 8200 kB' 'PageTables: 5084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94196 kB' 'Slab: 318888 kB' 'SReclaimable: 94196 kB' 'SUnreclaim: 224692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.385 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.386 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:40.386 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:40.386 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:40.386 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:40.386 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:40.386 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:40.386 node0=1024 expecting 1024 00:04:40.386 05:18:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:40.386 00:04:40.386 real 0m2.786s 00:04:40.386 user 0m1.183s 00:04:40.386 sys 0m1.527s 00:04:40.386 05:18:47 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:40.386 05:18:47 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:40.386 ************************************ 00:04:40.386 END TEST no_shrink_alloc 00:04:40.386 ************************************ 00:04:40.386 05:18:47 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:40.386 05:18:47 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:40.386 05:18:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:40.386 05:18:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.386 05:18:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:40.386 05:18:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.386 05:18:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:40.386 05:18:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:40.386 05:18:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.386 05:18:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:40.386 05:18:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.386 05:18:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:40.386 05:18:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:40.386 05:18:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:40.386 00:04:40.386 real 0m11.350s 00:04:40.386 user 0m4.402s 00:04:40.386 sys 0m5.801s 00:04:40.386 05:18:47 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:40.386 05:18:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:40.386 ************************************ 00:04:40.386 END TEST hugepages 00:04:40.386 ************************************ 00:04:40.644 05:18:47 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:40.644 05:18:47 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:40.644 05:18:47 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.644 05:18:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:40.644 ************************************ 00:04:40.644 START TEST driver 00:04:40.644 ************************************ 00:04:40.644 05:18:47 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:40.644 * Looking for test storage... 
00:04:40.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:40.644 05:18:47 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:40.644 05:18:47 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:40.644 05:18:47 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:43.209 05:18:50 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:43.209 05:18:50 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.209 05:18:50 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.209 05:18:50 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:43.209 ************************************ 00:04:43.209 START TEST guess_driver 00:04:43.209 ************************************ 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:43.209 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:43.209 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:43.209 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:43.209 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:43.209 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:43.209 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:43.209 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:43.209 05:18:50 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:43.209 Looking for driver=vfio-pci 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.209 05:18:50 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:43.210 05:18:50 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.210 05:18:50 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.144 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.404 05:18:51 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.404 05:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:45.341 05:18:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:45.342 05:18:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:45.342 05:18:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:45.342 05:18:52 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:45.342 05:18:52 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:45.342 05:18:52 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.342 05:18:52 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.890 00:04:47.890 real 0m4.680s 00:04:47.890 user 0m1.058s 00:04:47.890 sys 0m1.747s 00:04:47.890 05:18:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.890 05:18:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:47.890 ************************************ 00:04:47.890 END TEST guess_driver 00:04:47.890 ************************************ 00:04:47.890 00:04:47.890 real 0m7.276s 00:04:47.890 user 0m1.611s 00:04:47.890 sys 0m2.823s 00:04:47.890 05:18:54 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.890 
05:18:54 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:47.890 ************************************ 00:04:47.890 END TEST driver 00:04:47.890 ************************************ 00:04:47.890 05:18:54 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:47.890 05:18:54 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.890 05:18:54 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.890 05:18:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:47.890 ************************************ 00:04:47.890 START TEST devices 00:04:47.890 ************************************ 00:04:47.890 05:18:54 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:47.890 * Looking for test storage... 00:04:47.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:47.890 05:18:54 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:47.890 05:18:54 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:47.890 05:18:54 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.890 05:18:54 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:49.264 05:18:56 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:49.264 05:18:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:49.264 05:18:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:49.264 05:18:56 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:49.264 05:18:56 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:49.264 05:18:56 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:49.264 05:18:56 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:49.264 05:18:56 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:49.264 05:18:56 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:49.264 05:18:56 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:49.264 05:18:56 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:49.264 05:18:56 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:49.264 05:18:56 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:49.264 05:18:56 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:49.264 05:18:56 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.264 05:18:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:49.264 05:18:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:49.264 05:18:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:49.264 05:18:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:49.264 05:18:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:49.264 05:18:56 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:49.264 05:18:56 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:49.523 No valid GPT data, 
bailing 00:04:49.523 05:18:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:49.523 05:18:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:49.523 05:18:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:49.523 05:18:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:49.523 05:18:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:49.523 05:18:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:49.523 05:18:56 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:49.523 05:18:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:49.523 05:18:56 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:49.523 05:18:56 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:49.523 05:18:56 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:49.523 05:18:56 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:49.523 05:18:56 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:49.523 05:18:56 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:49.523 05:18:56 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.523 05:18:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:49.523 ************************************ 00:04:49.523 START TEST nvme_mount 00:04:49.523 ************************************ 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:49.523 05:18:56 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:49.523 05:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:50.459 Creating new GPT entries in memory. 00:04:50.459 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:50.459 other utilities. 00:04:50.460 05:18:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:50.460 05:18:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.460 05:18:57 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:50.460 05:18:57 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:50.460 05:18:57 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:51.394 Creating new GPT entries in memory. 00:04:51.394 The operation has completed successfully. 00:04:51.394 05:18:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:51.394 05:18:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:51.394 05:18:58 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3092231 00:04:51.394 05:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.394 05:18:58 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:51.394 05:18:58 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.394 05:18:58 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:51.394 05:18:58 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:51.653 05:18:58 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.653 05:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.653 05:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:51.653 05:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:51.653 05:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.653 05:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.653 05:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:51.653 05:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:51.653 05:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:51.653 05:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
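The trace above zaps the namespace's partition table with sgdisk, creates a single 1 GiB partition under flock, waits for the partition uevent, then formats and mounts it for the nvme_mount test. A minimal bash sketch of that sequence, assuming a single test namespace at /dev/nvme0n1 and using udevadm settle as a stand-in for the repo's sync_dev_uevents.sh helper (paths and names are illustrative, not copied from common.sh):

  disk=/dev/nvme0n1            # assumption: the only NVMe namespace under test
  mount_dir=/tmp/nvme_mount    # assumption: the job uses $rootdir/test/setup/nvme_mount

  sgdisk "$disk" --zap-all                           # wipe any existing GPT/MBR
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # one 1 GiB partition, serialized on the disk
  udevadm settle                                     # wait for /dev/nvme0n1p1 to appear
  mkdir -p "$mount_dir"
  mkfs.ext4 -qF "${disk}p1"
  mount "${disk}p1" "$mount_dir"
  touch "$mount_dir/test_nvme"                       # dummy file the verify step checks for
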
00:04:51.653 05:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.653 05:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:51.653 05:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:51.653 05:18:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.653 05:18:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.599 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.858 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.858 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:52.858 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.858 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.858 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.858 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:52.858 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.858 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.858 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.858 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:52.858 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.858 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.858 05:18:59 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:53.116 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:53.116 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:53.116 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:53.116 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:53.116 05:19:00 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.116 05:19:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 05:19:01 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:54.489 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.490 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.490 05:19:01 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:54.490 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.490 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:54.490 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:54.490 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:54.490 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:54.490 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:54.490 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:54.490 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:54.490 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:54.490 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.490 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:54.490 05:19:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:54.490 05:19:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.490 05:19:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.426 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.685 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.685 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:55.685 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:55.685 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:55.685 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.685 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:55.685 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:55.685 05:19:02 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:55.685 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:55.685 00:04:55.685 real 0m6.223s 00:04:55.685 user 0m1.439s 00:04:55.685 sys 0m2.375s 00:04:55.685 05:19:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:55.685 05:19:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:55.685 ************************************ 00:04:55.685 END TEST nvme_mount 00:04:55.685 ************************************ 
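The wipefs output above comes from the nvme_mount cleanup path: unmount the test directory, then clear the ext4 and GPT signatures so the namespace is blank before the next test. A hedged sketch of that cleanup, with illustrative paths:

  mount_dir=/tmp/nvme_mount    # assumption
  disk=/dev/nvme0n1            # assumption

  if mountpoint -q "$mount_dir"; then
      umount "$mount_dir"
  fi
  [[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"   # erases the ext4 magic at offset 0x438
  [[ -b $disk ]] && wipefs --all "$disk"           # erases primary/backup GPT and the protective MBR
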
00:04:55.685 05:19:02 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:55.685 05:19:02 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:55.685 05:19:02 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:55.685 05:19:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:55.685 ************************************ 00:04:55.685 START TEST dm_mount 00:04:55.685 ************************************ 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:55.685 05:19:02 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:56.620 Creating new GPT entries in memory. 00:04:56.620 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:56.620 other utilities. 00:04:56.620 05:19:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:56.620 05:19:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:56.620 05:19:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:56.620 05:19:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:56.620 05:19:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:57.995 Creating new GPT entries in memory. 00:04:57.995 The operation has completed successfully. 
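dm_mount reuses the same partitioning helper but creates two 1 GiB partitions (sectors 2048-2099199 and 2099200-4196351) and stitches them into a single device-mapper target before formatting. A sketch of that setup; the dmsetup table below is a reconstruction of a plain linear concatenation, not copied from devices.sh:

  disk=/dev/nvme0n1            # assumption
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199
  flock "$disk" sgdisk "$disk" --new=2:2099200:4196351
  udevadm settle               # stand-in for sync_dev_uevents.sh

  dmsetup create nvme_dm_test <<'EOF'
  0 2097152 linear /dev/nvme0n1p1 0
  2097152 2097152 linear /dev/nvme0n1p2 0
  EOF
  # The resulting /dev/mapper/nvme_dm_test (dm-0) is then formatted and mounted
  # the same way as the bare namespace in the nvme_mount test.
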
00:04:57.995 05:19:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:57.995 05:19:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.995 05:19:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.995 05:19:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.995 05:19:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:58.932 The operation has completed successfully. 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3094620 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.932 05:19:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.932 05:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:00.191 05:19:07 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.191 05:19:07 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.127 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.387 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.387 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:01.387 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:01.387 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:01.387 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:01.387 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.387 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:01.387 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.387 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:01.387 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.387 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.387 05:19:08 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:01.387 00:05:01.387 real 0m5.604s 00:05:01.387 user 0m0.942s 00:05:01.387 sys 0m1.524s 00:05:01.387 05:19:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.387 05:19:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:01.387 ************************************ 00:05:01.387 END TEST dm_mount 00:05:01.387 ************************************ 00:05:01.387 05:19:08 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:01.387 05:19:08 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:01.387 05:19:08 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.387 05:19:08 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.387 05:19:08 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:01.387 05:19:08 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.387 05:19:08 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.646 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:01.646 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:01.646 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:01.646 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:01.646 05:19:08 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:01.646 05:19:08 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:01.646 05:19:08 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.646 05:19:08 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.646 05:19:08 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.646 05:19:08 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.646 05:19:08 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:01.646 00:05:01.646 real 0m13.771s 00:05:01.646 user 0m3.039s 00:05:01.646 sys 0m4.956s 00:05:01.646 05:19:08 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.646 05:19:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:01.646 ************************************ 00:05:01.646 END TEST devices 00:05:01.646 ************************************ 00:05:01.646 00:05:01.646 real 0m42.998s 00:05:01.646 user 0m12.302s 00:05:01.646 sys 0m18.969s 00:05:01.646 05:19:08 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.646 05:19:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:01.646 ************************************ 00:05:01.646 END TEST setup.sh 00:05:01.646 ************************************ 00:05:01.646 05:19:08 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:03.020 Hugepages 00:05:03.020 node hugesize free / total 00:05:03.020 node0 1048576kB 0 / 0 00:05:03.021 node0 2048kB 2048 / 2048 00:05:03.021 node1 1048576kB 0 / 0 00:05:03.021 node1 2048kB 0 / 0 00:05:03.021 00:05:03.021 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:03.021 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:03.021 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:03.021 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:03.021 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:03.021 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:03.021 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:03.021 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:03.021 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:03.021 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:03.021 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:03.021 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:03.021 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:03.021 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:03.021 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:03.021 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:03.021 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:03.021 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:03.021 05:19:09 -- spdk/autotest.sh@130 -- # uname -s 00:05:03.021 05:19:09 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:03.021 05:19:09 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:03.021 05:19:09 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:04.399 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:04.399 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:04.399 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:04.399 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:04.399 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:04.399 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:04.399 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:04.399 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:04.399 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:04.399 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:04.399 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:04.399 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:04.399 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:04.399 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:04.399 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:04.399 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:05.335 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:05.335 05:19:12 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:06.270 05:19:13 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:06.270 05:19:13 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:06.270 05:19:13 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:06.270 05:19:13 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:06.270 05:19:13 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:06.270 05:19:13 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:06.270 05:19:13 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:06.270 05:19:13 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:06.270 05:19:13 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:06.270 05:19:13 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:06.270 05:19:13 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:06.270 05:19:13 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:07.225 Waiting for block devices as requested 00:05:07.483 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:07.483 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:07.483 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:07.742 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:07.742 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:07.742 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:08.001 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:08.001 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:08.001 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:08.001 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:08.260 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:08.260 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:08.260 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:08.260 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:08.518 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:08.518 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:08.518 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:08.776 05:19:15 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
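The pre-cleanup steps that follow resolve each NVMe PCI address to its controller character device through sysfs, as get_nvme_ctrlr_from_bdf does in the trace below. A small sketch of that lookup, assuming the single controller at 0000:88:00.0:

  bdf=0000:88:00.0   # assumption: the only NVMe controller in this system

  for ctrl in /sys/class/nvme/nvme*; do
      # The canonical sysfs path of a controller contains its PCI address,
      # e.g. .../0000:88:00.0/nvme/nvme0, so a grep on the resolved link is enough.
      if readlink -f "$ctrl" | grep -q "$bdf/nvme/nvme"; then
          echo "controller for $bdf: /dev/$(basename "$ctrl")"
      fi
  done
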
00:05:08.776 05:19:15 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:08.776 05:19:15 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:08.776 05:19:15 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:05:08.776 05:19:15 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:08.776 05:19:15 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:08.776 05:19:15 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:08.776 05:19:15 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:08.776 05:19:15 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:08.776 05:19:15 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:08.776 05:19:15 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:08.776 05:19:15 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:08.776 05:19:15 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:08.776 05:19:15 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:05:08.776 05:19:15 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:08.777 05:19:15 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:08.777 05:19:15 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:08.777 05:19:15 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:08.777 05:19:15 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:08.777 05:19:15 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:08.777 05:19:15 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:08.777 05:19:15 -- common/autotest_common.sh@1553 -- # continue 00:05:08.777 05:19:15 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:08.777 05:19:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:08.777 05:19:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.777 05:19:15 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:08.777 05:19:15 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:08.777 05:19:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.777 05:19:15 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:10.152 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:10.152 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:10.152 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:10.152 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:10.152 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:10.152 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:10.152 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:10.152 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:10.152 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:10.152 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:10.152 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:10.152 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:10.152 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:10.152 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:10.152 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:10.152 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:11.088 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:11.088 05:19:18 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:11.088 05:19:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.088 05:19:18 -- 
common/autotest_common.sh@10 -- # set +x 00:05:11.088 05:19:18 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:11.088 05:19:18 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:11.088 05:19:18 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:11.088 05:19:18 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:11.088 05:19:18 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:11.088 05:19:18 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:11.088 05:19:18 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:11.088 05:19:18 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:11.088 05:19:18 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:11.088 05:19:18 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:11.088 05:19:18 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:11.348 05:19:18 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:11.348 05:19:18 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:11.348 05:19:18 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:11.348 05:19:18 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:11.348 05:19:18 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:11.348 05:19:18 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:11.348 05:19:18 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:11.348 05:19:18 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:05:11.348 05:19:18 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:05:11.348 05:19:18 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3099790 00:05:11.348 05:19:18 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.348 05:19:18 -- common/autotest_common.sh@1594 -- # waitforlisten 3099790 00:05:11.348 05:19:18 -- common/autotest_common.sh@827 -- # '[' -z 3099790 ']' 00:05:11.348 05:19:18 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.348 05:19:18 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:11.348 05:19:18 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.348 05:19:18 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:11.348 05:19:18 -- common/autotest_common.sh@10 -- # set +x 00:05:11.348 [2024-07-14 05:19:18.289777] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
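The opal_revert_cleanup step above launches spdk_tgt and blocks until its JSON-RPC socket answers. A simplified stand-in for that waitforlisten loop, polling rpc_get_methods on /var/tmp/spdk.sock with the paths used throughout this job:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc_sock=/var/tmp/spdk.sock

  "$rootdir/build/bin/spdk_tgt" &
  tgt_pid=$!

  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
  until "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &>/dev/null; do
      kill -0 "$tgt_pid" || exit 1   # give up if the target died during startup
      sleep 0.5
  done
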
00:05:11.348 [2024-07-14 05:19:18.289889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3099790 ] 00:05:11.348 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.348 [2024-07-14 05:19:18.354319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.348 [2024-07-14 05:19:18.443753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.607 05:19:18 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:11.607 05:19:18 -- common/autotest_common.sh@860 -- # return 0 00:05:11.607 05:19:18 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:11.607 05:19:18 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:11.607 05:19:18 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:14.890 nvme0n1 00:05:14.890 05:19:21 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:15.184 [2024-07-14 05:19:22.018104] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:15.184 [2024-07-14 05:19:22.018143] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:15.184 request: 00:05:15.184 { 00:05:15.184 "nvme_ctrlr_name": "nvme0", 00:05:15.184 "password": "test", 00:05:15.184 "method": "bdev_nvme_opal_revert", 00:05:15.184 "req_id": 1 00:05:15.184 } 00:05:15.184 Got JSON-RPC error response 00:05:15.184 response: 00:05:15.184 { 00:05:15.184 "code": -32603, 00:05:15.184 "message": "Internal error" 00:05:15.184 } 00:05:15.184 05:19:22 -- common/autotest_common.sh@1600 -- # true 00:05:15.184 05:19:22 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:15.184 05:19:22 -- common/autotest_common.sh@1604 -- # killprocess 3099790 00:05:15.184 05:19:22 -- common/autotest_common.sh@946 -- # '[' -z 3099790 ']' 00:05:15.184 05:19:22 -- common/autotest_common.sh@950 -- # kill -0 3099790 00:05:15.184 05:19:22 -- common/autotest_common.sh@951 -- # uname 00:05:15.184 05:19:22 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:15.184 05:19:22 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3099790 00:05:15.184 05:19:22 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:15.184 05:19:22 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:15.184 05:19:22 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3099790' 00:05:15.184 killing process with pid 3099790 00:05:15.184 05:19:22 -- common/autotest_common.sh@965 -- # kill 3099790 00:05:15.184 05:19:22 -- common/autotest_common.sh@970 -- # wait 3099790 00:05:17.101 05:19:23 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:17.101 05:19:23 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:17.101 05:19:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:17.101 05:19:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:17.101 05:19:23 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:17.101 05:19:23 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:17.101 05:19:23 -- common/autotest_common.sh@10 -- # set +x 00:05:17.101 05:19:23 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:17.101 05:19:23 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:17.101 05:19:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.101 05:19:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.101 05:19:23 -- common/autotest_common.sh@10 -- # set +x 00:05:17.101 ************************************ 00:05:17.101 START TEST env 00:05:17.101 ************************************ 00:05:17.101 05:19:23 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:17.101 * Looking for test storage... 00:05:17.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:17.101 05:19:23 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:17.101 05:19:23 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.101 05:19:23 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.101 05:19:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.101 ************************************ 00:05:17.101 START TEST env_memory 00:05:17.101 ************************************ 00:05:17.101 05:19:23 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:17.101 00:05:17.101 00:05:17.101 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.101 http://cunit.sourceforge.net/ 00:05:17.101 00:05:17.101 00:05:17.101 Suite: memory 00:05:17.101 Test: alloc and free memory map ...[2024-07-14 05:19:23.958872] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:17.101 passed 00:05:17.101 Test: mem map translation ...[2024-07-14 05:19:23.979661] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:17.101 [2024-07-14 05:19:23.979683] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:17.101 [2024-07-14 05:19:23.979732] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:17.101 [2024-07-14 05:19:23.979744] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:17.101 passed 00:05:17.101 Test: mem map registration ...[2024-07-14 05:19:24.020951] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:17.101 [2024-07-14 05:19:24.020971] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:17.101 passed 00:05:17.101 Test: mem map adjacent registrations ...passed 00:05:17.101 00:05:17.101 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.101 suites 1 1 n/a 0 0 00:05:17.101 tests 4 4 4 0 0 00:05:17.101 asserts 152 152 152 0 n/a 00:05:17.101 00:05:17.101 Elapsed time = 0.139 seconds 00:05:17.101 00:05:17.101 real 0m0.145s 00:05:17.101 user 0m0.136s 00:05:17.101 sys 0m0.009s 00:05:17.101 05:19:24 
env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.101 05:19:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:17.101 ************************************ 00:05:17.101 END TEST env_memory 00:05:17.101 ************************************ 00:05:17.101 05:19:24 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:17.101 05:19:24 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.101 05:19:24 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.101 05:19:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.101 ************************************ 00:05:17.101 START TEST env_vtophys 00:05:17.101 ************************************ 00:05:17.101 05:19:24 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:17.101 EAL: lib.eal log level changed from notice to debug 00:05:17.101 EAL: Detected lcore 0 as core 0 on socket 0 00:05:17.101 EAL: Detected lcore 1 as core 1 on socket 0 00:05:17.101 EAL: Detected lcore 2 as core 2 on socket 0 00:05:17.101 EAL: Detected lcore 3 as core 3 on socket 0 00:05:17.101 EAL: Detected lcore 4 as core 4 on socket 0 00:05:17.101 EAL: Detected lcore 5 as core 5 on socket 0 00:05:17.101 EAL: Detected lcore 6 as core 8 on socket 0 00:05:17.101 EAL: Detected lcore 7 as core 9 on socket 0 00:05:17.101 EAL: Detected lcore 8 as core 10 on socket 0 00:05:17.101 EAL: Detected lcore 9 as core 11 on socket 0 00:05:17.101 EAL: Detected lcore 10 as core 12 on socket 0 00:05:17.101 EAL: Detected lcore 11 as core 13 on socket 0 00:05:17.101 EAL: Detected lcore 12 as core 0 on socket 1 00:05:17.101 EAL: Detected lcore 13 as core 1 on socket 1 00:05:17.101 EAL: Detected lcore 14 as core 2 on socket 1 00:05:17.101 EAL: Detected lcore 15 as core 3 on socket 1 00:05:17.101 EAL: Detected lcore 16 as core 4 on socket 1 00:05:17.101 EAL: Detected lcore 17 as core 5 on socket 1 00:05:17.101 EAL: Detected lcore 18 as core 8 on socket 1 00:05:17.101 EAL: Detected lcore 19 as core 9 on socket 1 00:05:17.101 EAL: Detected lcore 20 as core 10 on socket 1 00:05:17.101 EAL: Detected lcore 21 as core 11 on socket 1 00:05:17.101 EAL: Detected lcore 22 as core 12 on socket 1 00:05:17.101 EAL: Detected lcore 23 as core 13 on socket 1 00:05:17.101 EAL: Detected lcore 24 as core 0 on socket 0 00:05:17.101 EAL: Detected lcore 25 as core 1 on socket 0 00:05:17.101 EAL: Detected lcore 26 as core 2 on socket 0 00:05:17.102 EAL: Detected lcore 27 as core 3 on socket 0 00:05:17.102 EAL: Detected lcore 28 as core 4 on socket 0 00:05:17.102 EAL: Detected lcore 29 as core 5 on socket 0 00:05:17.102 EAL: Detected lcore 30 as core 8 on socket 0 00:05:17.102 EAL: Detected lcore 31 as core 9 on socket 0 00:05:17.102 EAL: Detected lcore 32 as core 10 on socket 0 00:05:17.102 EAL: Detected lcore 33 as core 11 on socket 0 00:05:17.102 EAL: Detected lcore 34 as core 12 on socket 0 00:05:17.102 EAL: Detected lcore 35 as core 13 on socket 0 00:05:17.102 EAL: Detected lcore 36 as core 0 on socket 1 00:05:17.102 EAL: Detected lcore 37 as core 1 on socket 1 00:05:17.102 EAL: Detected lcore 38 as core 2 on socket 1 00:05:17.102 EAL: Detected lcore 39 as core 3 on socket 1 00:05:17.102 EAL: Detected lcore 40 as core 4 on socket 1 00:05:17.102 EAL: Detected lcore 41 as core 5 on socket 1 00:05:17.102 EAL: Detected lcore 42 as core 8 on socket 1 00:05:17.102 EAL: Detected lcore 43 as core 9 
on socket 1 00:05:17.102 EAL: Detected lcore 44 as core 10 on socket 1 00:05:17.102 EAL: Detected lcore 45 as core 11 on socket 1 00:05:17.102 EAL: Detected lcore 46 as core 12 on socket 1 00:05:17.102 EAL: Detected lcore 47 as core 13 on socket 1 00:05:17.102 EAL: Maximum logical cores by configuration: 128 00:05:17.102 EAL: Detected CPU lcores: 48 00:05:17.102 EAL: Detected NUMA nodes: 2 00:05:17.102 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:17.102 EAL: Detected shared linkage of DPDK 00:05:17.102 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:17.102 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:17.102 EAL: Registered [vdev] bus. 00:05:17.102 EAL: bus.vdev log level changed from disabled to notice 00:05:17.102 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:17.102 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:17.102 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:17.102 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:17.102 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:17.102 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:17.102 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:17.102 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:17.102 EAL: No shared files mode enabled, IPC will be disabled 00:05:17.102 EAL: No shared files mode enabled, IPC is disabled 00:05:17.102 EAL: Bus pci wants IOVA as 'DC' 00:05:17.102 EAL: Bus vdev wants IOVA as 'DC' 00:05:17.102 EAL: Buses did not request a specific IOVA mode. 00:05:17.102 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:17.102 EAL: Selected IOVA mode 'VA' 00:05:17.102 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.102 EAL: Probing VFIO support... 00:05:17.102 EAL: IOMMU type 1 (Type 1) is supported 00:05:17.102 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:17.102 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:17.102 EAL: VFIO support initialized 00:05:17.102 EAL: Ask a virtual area of 0x2e000 bytes 00:05:17.102 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:17.102 EAL: Setting up physically contiguous memory... 
00:05:17.102 EAL: Setting maximum number of open files to 524288 00:05:17.102 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:17.102 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:17.102 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:17.102 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.102 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:17.102 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:17.102 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.102 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:17.102 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:17.102 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.102 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:17.102 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:17.102 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.102 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:17.102 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:17.102 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.102 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:17.102 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:17.102 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.102 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:17.102 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:17.102 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.102 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:17.102 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:17.102 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.102 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:17.102 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:17.102 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:17.102 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.102 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:17.102 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:17.102 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.102 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:17.102 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:17.102 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.102 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:17.102 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:17.102 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.102 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:17.102 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:17.102 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.102 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:17.102 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:17.102 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.102 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:17.102 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:17.102 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.102 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:17.102 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:17.102 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.102 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:17.102 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:17.102 EAL: Hugepages will be freed exactly as allocated. 00:05:17.102 EAL: No shared files mode enabled, IPC is disabled 00:05:17.102 EAL: No shared files mode enabled, IPC is disabled 00:05:17.102 EAL: TSC frequency is ~2700000 KHz 00:05:17.102 EAL: Main lcore 0 is ready (tid=7f9c10224a00;cpuset=[0]) 00:05:17.102 EAL: Trying to obtain current memory policy. 00:05:17.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.102 EAL: Restoring previous memory policy: 0 00:05:17.102 EAL: request: mp_malloc_sync 00:05:17.102 EAL: No shared files mode enabled, IPC is disabled 00:05:17.102 EAL: Heap on socket 0 was expanded by 2MB 00:05:17.102 EAL: No shared files mode enabled, IPC is disabled 00:05:17.102 EAL: No shared files mode enabled, IPC is disabled 00:05:17.102 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:17.102 EAL: Mem event callback 'spdk:(nil)' registered 00:05:17.102 00:05:17.102 00:05:17.102 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.102 http://cunit.sourceforge.net/ 00:05:17.102 00:05:17.102 00:05:17.102 Suite: components_suite 00:05:17.102 Test: vtophys_malloc_test ...passed 00:05:17.102 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:17.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.102 EAL: Restoring previous memory policy: 4 00:05:17.102 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.102 EAL: request: mp_malloc_sync 00:05:17.102 EAL: No shared files mode enabled, IPC is disabled 00:05:17.102 EAL: Heap on socket 0 was expanded by 4MB 00:05:17.102 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.102 EAL: request: mp_malloc_sync 00:05:17.102 EAL: No shared files mode enabled, IPC is disabled 00:05:17.102 EAL: Heap on socket 0 was shrunk by 4MB 00:05:17.102 EAL: Trying to obtain current memory policy. 00:05:17.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.102 EAL: Restoring previous memory policy: 4 00:05:17.102 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.102 EAL: request: mp_malloc_sync 00:05:17.102 EAL: No shared files mode enabled, IPC is disabled 00:05:17.102 EAL: Heap on socket 0 was expanded by 6MB 00:05:17.102 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.102 EAL: request: mp_malloc_sync 00:05:17.102 EAL: No shared files mode enabled, IPC is disabled 00:05:17.102 EAL: Heap on socket 0 was shrunk by 6MB 00:05:17.102 EAL: Trying to obtain current memory policy. 00:05:17.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.102 EAL: Restoring previous memory policy: 4 00:05:17.102 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.102 EAL: request: mp_malloc_sync 00:05:17.102 EAL: No shared files mode enabled, IPC is disabled 00:05:17.102 EAL: Heap on socket 0 was expanded by 10MB 00:05:17.102 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.102 EAL: request: mp_malloc_sync 00:05:17.102 EAL: No shared files mode enabled, IPC is disabled 00:05:17.102 EAL: Heap on socket 0 was shrunk by 10MB 00:05:17.102 EAL: Trying to obtain current memory policy. 
00:05:17.103 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.103 EAL: Restoring previous memory policy: 4 00:05:17.103 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.103 EAL: request: mp_malloc_sync 00:05:17.103 EAL: No shared files mode enabled, IPC is disabled 00:05:17.103 EAL: Heap on socket 0 was expanded by 18MB 00:05:17.103 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.103 EAL: request: mp_malloc_sync 00:05:17.103 EAL: No shared files mode enabled, IPC is disabled 00:05:17.103 EAL: Heap on socket 0 was shrunk by 18MB 00:05:17.103 EAL: Trying to obtain current memory policy. 00:05:17.103 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.361 EAL: Restoring previous memory policy: 4 00:05:17.361 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.361 EAL: request: mp_malloc_sync 00:05:17.361 EAL: No shared files mode enabled, IPC is disabled 00:05:17.361 EAL: Heap on socket 0 was expanded by 34MB 00:05:17.361 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.361 EAL: request: mp_malloc_sync 00:05:17.361 EAL: No shared files mode enabled, IPC is disabled 00:05:17.361 EAL: Heap on socket 0 was shrunk by 34MB 00:05:17.361 EAL: Trying to obtain current memory policy. 00:05:17.361 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.361 EAL: Restoring previous memory policy: 4 00:05:17.361 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.361 EAL: request: mp_malloc_sync 00:05:17.361 EAL: No shared files mode enabled, IPC is disabled 00:05:17.361 EAL: Heap on socket 0 was expanded by 66MB 00:05:17.361 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.361 EAL: request: mp_malloc_sync 00:05:17.361 EAL: No shared files mode enabled, IPC is disabled 00:05:17.361 EAL: Heap on socket 0 was shrunk by 66MB 00:05:17.361 EAL: Trying to obtain current memory policy. 00:05:17.361 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.361 EAL: Restoring previous memory policy: 4 00:05:17.361 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.361 EAL: request: mp_malloc_sync 00:05:17.361 EAL: No shared files mode enabled, IPC is disabled 00:05:17.361 EAL: Heap on socket 0 was expanded by 130MB 00:05:17.361 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.361 EAL: request: mp_malloc_sync 00:05:17.361 EAL: No shared files mode enabled, IPC is disabled 00:05:17.361 EAL: Heap on socket 0 was shrunk by 130MB 00:05:17.361 EAL: Trying to obtain current memory policy. 00:05:17.361 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.361 EAL: Restoring previous memory policy: 4 00:05:17.361 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.361 EAL: request: mp_malloc_sync 00:05:17.361 EAL: No shared files mode enabled, IPC is disabled 00:05:17.361 EAL: Heap on socket 0 was expanded by 258MB 00:05:17.361 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.620 EAL: request: mp_malloc_sync 00:05:17.620 EAL: No shared files mode enabled, IPC is disabled 00:05:17.620 EAL: Heap on socket 0 was shrunk by 258MB 00:05:17.620 EAL: Trying to obtain current memory policy. 
00:05:17.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.620 EAL: Restoring previous memory policy: 4 00:05:17.620 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.620 EAL: request: mp_malloc_sync 00:05:17.620 EAL: No shared files mode enabled, IPC is disabled 00:05:17.620 EAL: Heap on socket 0 was expanded by 514MB 00:05:17.879 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.879 EAL: request: mp_malloc_sync 00:05:17.879 EAL: No shared files mode enabled, IPC is disabled 00:05:17.879 EAL: Heap on socket 0 was shrunk by 514MB 00:05:17.879 EAL: Trying to obtain current memory policy. 00:05:17.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.138 EAL: Restoring previous memory policy: 4 00:05:18.138 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.138 EAL: request: mp_malloc_sync 00:05:18.138 EAL: No shared files mode enabled, IPC is disabled 00:05:18.138 EAL: Heap on socket 0 was expanded by 1026MB 00:05:18.396 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.655 EAL: request: mp_malloc_sync 00:05:18.655 EAL: No shared files mode enabled, IPC is disabled 00:05:18.655 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:18.655 passed 00:05:18.655 00:05:18.655 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.655 suites 1 1 n/a 0 0 00:05:18.655 tests 2 2 2 0 0 00:05:18.655 asserts 497 497 497 0 n/a 00:05:18.655 00:05:18.655 Elapsed time = 1.371 seconds 00:05:18.655 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.655 EAL: request: mp_malloc_sync 00:05:18.655 EAL: No shared files mode enabled, IPC is disabled 00:05:18.655 EAL: Heap on socket 0 was shrunk by 2MB 00:05:18.655 EAL: No shared files mode enabled, IPC is disabled 00:05:18.655 EAL: No shared files mode enabled, IPC is disabled 00:05:18.655 EAL: No shared files mode enabled, IPC is disabled 00:05:18.655 00:05:18.655 real 0m1.487s 00:05:18.655 user 0m0.850s 00:05:18.655 sys 0m0.602s 00:05:18.655 05:19:25 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:18.655 05:19:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:18.655 ************************************ 00:05:18.655 END TEST env_vtophys 00:05:18.655 ************************************ 00:05:18.655 05:19:25 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:18.655 05:19:25 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:18.655 05:19:25 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:18.655 05:19:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.655 ************************************ 00:05:18.655 START TEST env_pci 00:05:18.655 ************************************ 00:05:18.655 05:19:25 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:18.655 00:05:18.655 00:05:18.655 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.655 http://cunit.sourceforge.net/ 00:05:18.655 00:05:18.655 00:05:18.655 Suite: pci 00:05:18.655 Test: pci_hook ...[2024-07-14 05:19:25.657355] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3100681 has claimed it 00:05:18.655 EAL: Cannot find device (10000:00:01.0) 00:05:18.655 EAL: Failed to attach device on primary process 00:05:18.655 passed 00:05:18.655 00:05:18.655 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:18.655 suites 1 1 n/a 0 0 00:05:18.655 tests 1 1 1 0 0 00:05:18.655 asserts 25 25 25 0 n/a 00:05:18.655 00:05:18.655 Elapsed time = 0.021 seconds 00:05:18.655 00:05:18.655 real 0m0.034s 00:05:18.655 user 0m0.007s 00:05:18.655 sys 0m0.026s 00:05:18.655 05:19:25 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:18.655 05:19:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:18.655 ************************************ 00:05:18.655 END TEST env_pci 00:05:18.655 ************************************ 00:05:18.655 05:19:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:18.655 05:19:25 env -- env/env.sh@15 -- # uname 00:05:18.655 05:19:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:18.655 05:19:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:18.655 05:19:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:18.655 05:19:25 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:18.655 05:19:25 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:18.655 05:19:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.655 ************************************ 00:05:18.655 START TEST env_dpdk_post_init 00:05:18.655 ************************************ 00:05:18.655 05:19:25 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:18.655 EAL: Detected CPU lcores: 48 00:05:18.655 EAL: Detected NUMA nodes: 2 00:05:18.655 EAL: Detected shared linkage of DPDK 00:05:18.655 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:18.914 EAL: Selected IOVA mode 'VA' 00:05:18.914 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.914 EAL: VFIO support initialized 00:05:18.914 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:18.914 EAL: Using IOMMU type 1 (Type 1) 00:05:18.914 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:18.914 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:18.914 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:18.914 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:18.914 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:18.914 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:18.914 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:18.914 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:18.914 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:18.914 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:18.914 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:18.914 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:18.914 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:18.914 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:18.914 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:19.173 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:19.741 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:23.022 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:23.022 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:23.022 Starting DPDK initialization... 00:05:23.022 Starting SPDK post initialization... 00:05:23.022 SPDK NVMe probe 00:05:23.022 Attaching to 0000:88:00.0 00:05:23.022 Attached to 0000:88:00.0 00:05:23.022 Cleaning up... 00:05:23.022 00:05:23.022 real 0m4.383s 00:05:23.022 user 0m3.231s 00:05:23.022 sys 0m0.209s 00:05:23.022 05:19:30 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.022 05:19:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:23.022 ************************************ 00:05:23.022 END TEST env_dpdk_post_init 00:05:23.022 ************************************ 00:05:23.281 05:19:30 env -- env/env.sh@26 -- # uname 00:05:23.281 05:19:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:23.281 05:19:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:23.281 05:19:30 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.281 05:19:30 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.281 05:19:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.281 ************************************ 00:05:23.281 START TEST env_mem_callbacks 00:05:23.281 ************************************ 00:05:23.281 05:19:30 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:23.281 EAL: Detected CPU lcores: 48 00:05:23.281 EAL: Detected NUMA nodes: 2 00:05:23.281 EAL: Detected shared linkage of DPDK 00:05:23.281 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:23.281 EAL: Selected IOVA mode 'VA' 00:05:23.281 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.281 EAL: VFIO support initialized 00:05:23.281 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:23.281 00:05:23.281 00:05:23.281 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.281 http://cunit.sourceforge.net/ 00:05:23.281 00:05:23.281 00:05:23.281 Suite: memory 00:05:23.281 Test: test ... 
00:05:23.281 register 0x200000200000 2097152 00:05:23.281 malloc 3145728 00:05:23.281 register 0x200000400000 4194304 00:05:23.281 buf 0x200000500000 len 3145728 PASSED 00:05:23.281 malloc 64 00:05:23.281 buf 0x2000004fff40 len 64 PASSED 00:05:23.281 malloc 4194304 00:05:23.281 register 0x200000800000 6291456 00:05:23.281 buf 0x200000a00000 len 4194304 PASSED 00:05:23.281 free 0x200000500000 3145728 00:05:23.281 free 0x2000004fff40 64 00:05:23.281 unregister 0x200000400000 4194304 PASSED 00:05:23.281 free 0x200000a00000 4194304 00:05:23.281 unregister 0x200000800000 6291456 PASSED 00:05:23.281 malloc 8388608 00:05:23.281 register 0x200000400000 10485760 00:05:23.281 buf 0x200000600000 len 8388608 PASSED 00:05:23.281 free 0x200000600000 8388608 00:05:23.281 unregister 0x200000400000 10485760 PASSED 00:05:23.281 passed 00:05:23.281 00:05:23.281 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.281 suites 1 1 n/a 0 0 00:05:23.281 tests 1 1 1 0 0 00:05:23.281 asserts 15 15 15 0 n/a 00:05:23.281 00:05:23.281 Elapsed time = 0.005 seconds 00:05:23.281 00:05:23.281 real 0m0.048s 00:05:23.281 user 0m0.014s 00:05:23.281 sys 0m0.034s 00:05:23.281 05:19:30 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.281 05:19:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:23.281 ************************************ 00:05:23.281 END TEST env_mem_callbacks 00:05:23.281 ************************************ 00:05:23.281 00:05:23.281 real 0m6.380s 00:05:23.281 user 0m4.361s 00:05:23.281 sys 0m1.060s 00:05:23.281 05:19:30 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.281 05:19:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.281 ************************************ 00:05:23.281 END TEST env 00:05:23.281 ************************************ 00:05:23.281 05:19:30 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:23.281 05:19:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.281 05:19:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.281 05:19:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.281 ************************************ 00:05:23.281 START TEST rpc 00:05:23.281 ************************************ 00:05:23.281 05:19:30 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:23.281 * Looking for test storage... 00:05:23.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:23.281 05:19:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3101342 00:05:23.281 05:19:30 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:23.281 05:19:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.281 05:19:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3101342 00:05:23.281 05:19:30 rpc -- common/autotest_common.sh@827 -- # '[' -z 3101342 ']' 00:05:23.281 05:19:30 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.281 05:19:30 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:23.281 05:19:30 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
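For reference, the rpc_cmd invocations traced below are thin wrappers around SPDK's scripts/rpc.py JSON-RPC client; once spdk_tgt reports that it is listening on /var/tmp/spdk.sock, the same bdev calls can be replayed by hand. A minimal sketch, assuming rpc.py's standard -s socket option and the checkout path used in this run (the method names and arguments simply mirror the rpc_cmd calls in the trace; treat the -s flag and the exact invocation style as assumptions, not something this log records):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk.sock
  # create an 8 MB malloc bdev with 512-byte blocks, as rpc_integrity does (auto-named Malloc0)
  $RPC -s $SOCK bdev_malloc_create 8 512
  # layer a passthru bdev on top of it, then inspect the resulting bdev list
  $RPC -s $SOCK bdev_passthru_create -b Malloc0 -p Passthru0
  $RPC -s $SOCK bdev_get_bdevs | jq length
  # tear down in reverse order
  $RPC -s $SOCK bdev_passthru_delete Passthru0
  $RPC -s $SOCK bdev_malloc_delete Malloc0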
00:05:23.281 05:19:30 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:23.281 05:19:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.281 [2024-07-14 05:19:30.380815] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:23.281 [2024-07-14 05:19:30.380936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3101342 ] 00:05:23.540 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.540 [2024-07-14 05:19:30.442668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.540 [2024-07-14 05:19:30.528786] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:23.540 [2024-07-14 05:19:30.528843] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3101342' to capture a snapshot of events at runtime. 00:05:23.540 [2024-07-14 05:19:30.528879] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:23.540 [2024-07-14 05:19:30.528890] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:23.540 [2024-07-14 05:19:30.528900] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3101342 for offline analysis/debug. 00:05:23.540 [2024-07-14 05:19:30.528941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.798 05:19:30 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:23.798 05:19:30 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:23.798 05:19:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:23.798 05:19:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:23.798 05:19:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:23.798 05:19:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:23.798 05:19:30 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.798 05:19:30 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.798 05:19:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.798 ************************************ 00:05:23.798 START TEST rpc_integrity 00:05:23.798 ************************************ 00:05:23.798 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:23.798 05:19:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:23.798 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.798 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.798 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.798 05:19:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:23.798 05:19:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:23.798 05:19:30 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:23.798 05:19:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:23.798 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.798 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.798 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.798 05:19:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:23.798 05:19:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:23.798 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.798 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.798 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.798 05:19:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:23.798 { 00:05:23.798 "name": "Malloc0", 00:05:23.798 "aliases": [ 00:05:23.798 "8bef2cca-c84d-43f1-8d6d-b721b27592ef" 00:05:23.798 ], 00:05:23.798 "product_name": "Malloc disk", 00:05:23.798 "block_size": 512, 00:05:23.798 "num_blocks": 16384, 00:05:23.798 "uuid": "8bef2cca-c84d-43f1-8d6d-b721b27592ef", 00:05:23.798 "assigned_rate_limits": { 00:05:23.798 "rw_ios_per_sec": 0, 00:05:23.798 "rw_mbytes_per_sec": 0, 00:05:23.798 "r_mbytes_per_sec": 0, 00:05:23.798 "w_mbytes_per_sec": 0 00:05:23.798 }, 00:05:23.798 "claimed": false, 00:05:23.798 "zoned": false, 00:05:23.798 "supported_io_types": { 00:05:23.798 "read": true, 00:05:23.798 "write": true, 00:05:23.798 "unmap": true, 00:05:23.798 "write_zeroes": true, 00:05:23.798 "flush": true, 00:05:23.798 "reset": true, 00:05:23.798 "compare": false, 00:05:23.798 "compare_and_write": false, 00:05:23.798 "abort": true, 00:05:23.798 "nvme_admin": false, 00:05:23.798 "nvme_io": false 00:05:23.798 }, 00:05:23.798 "memory_domains": [ 00:05:23.798 { 00:05:23.798 "dma_device_id": "system", 00:05:23.798 "dma_device_type": 1 00:05:23.798 }, 00:05:23.798 { 00:05:23.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.798 "dma_device_type": 2 00:05:23.798 } 00:05:23.798 ], 00:05:23.798 "driver_specific": {} 00:05:23.798 } 00:05:23.798 ]' 00:05:23.798 05:19:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:24.055 05:19:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:24.055 05:19:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:24.055 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.055 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.055 [2024-07-14 05:19:30.928338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:24.055 [2024-07-14 05:19:30.928388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:24.055 [2024-07-14 05:19:30.928413] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15d9d60 00:05:24.055 [2024-07-14 05:19:30.928428] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:24.055 [2024-07-14 05:19:30.929935] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:24.055 [2024-07-14 05:19:30.929963] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:24.055 Passthru0 00:05:24.055 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.055 05:19:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:24.055 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.055 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.055 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.055 05:19:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:24.055 { 00:05:24.055 "name": "Malloc0", 00:05:24.055 "aliases": [ 00:05:24.055 "8bef2cca-c84d-43f1-8d6d-b721b27592ef" 00:05:24.055 ], 00:05:24.056 "product_name": "Malloc disk", 00:05:24.056 "block_size": 512, 00:05:24.056 "num_blocks": 16384, 00:05:24.056 "uuid": "8bef2cca-c84d-43f1-8d6d-b721b27592ef", 00:05:24.056 "assigned_rate_limits": { 00:05:24.056 "rw_ios_per_sec": 0, 00:05:24.056 "rw_mbytes_per_sec": 0, 00:05:24.056 "r_mbytes_per_sec": 0, 00:05:24.056 "w_mbytes_per_sec": 0 00:05:24.056 }, 00:05:24.056 "claimed": true, 00:05:24.056 "claim_type": "exclusive_write", 00:05:24.056 "zoned": false, 00:05:24.056 "supported_io_types": { 00:05:24.056 "read": true, 00:05:24.056 "write": true, 00:05:24.056 "unmap": true, 00:05:24.056 "write_zeroes": true, 00:05:24.056 "flush": true, 00:05:24.056 "reset": true, 00:05:24.056 "compare": false, 00:05:24.056 "compare_and_write": false, 00:05:24.056 "abort": true, 00:05:24.056 "nvme_admin": false, 00:05:24.056 "nvme_io": false 00:05:24.056 }, 00:05:24.056 "memory_domains": [ 00:05:24.056 { 00:05:24.056 "dma_device_id": "system", 00:05:24.056 "dma_device_type": 1 00:05:24.056 }, 00:05:24.056 { 00:05:24.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.056 "dma_device_type": 2 00:05:24.056 } 00:05:24.056 ], 00:05:24.056 "driver_specific": {} 00:05:24.056 }, 00:05:24.056 { 00:05:24.056 "name": "Passthru0", 00:05:24.056 "aliases": [ 00:05:24.056 "733a367b-3fa1-5475-85a5-a7db772b7b92" 00:05:24.056 ], 00:05:24.056 "product_name": "passthru", 00:05:24.056 "block_size": 512, 00:05:24.056 "num_blocks": 16384, 00:05:24.056 "uuid": "733a367b-3fa1-5475-85a5-a7db772b7b92", 00:05:24.056 "assigned_rate_limits": { 00:05:24.056 "rw_ios_per_sec": 0, 00:05:24.056 "rw_mbytes_per_sec": 0, 00:05:24.056 "r_mbytes_per_sec": 0, 00:05:24.056 "w_mbytes_per_sec": 0 00:05:24.056 }, 00:05:24.056 "claimed": false, 00:05:24.056 "zoned": false, 00:05:24.056 "supported_io_types": { 00:05:24.056 "read": true, 00:05:24.056 "write": true, 00:05:24.056 "unmap": true, 00:05:24.056 "write_zeroes": true, 00:05:24.056 "flush": true, 00:05:24.056 "reset": true, 00:05:24.056 "compare": false, 00:05:24.056 "compare_and_write": false, 00:05:24.056 "abort": true, 00:05:24.056 "nvme_admin": false, 00:05:24.056 "nvme_io": false 00:05:24.056 }, 00:05:24.056 "memory_domains": [ 00:05:24.056 { 00:05:24.056 "dma_device_id": "system", 00:05:24.056 "dma_device_type": 1 00:05:24.056 }, 00:05:24.056 { 00:05:24.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.056 "dma_device_type": 2 00:05:24.056 } 00:05:24.056 ], 00:05:24.056 "driver_specific": { 00:05:24.056 "passthru": { 00:05:24.056 "name": "Passthru0", 00:05:24.056 "base_bdev_name": "Malloc0" 00:05:24.056 } 00:05:24.056 } 00:05:24.056 } 00:05:24.056 ]' 00:05:24.056 05:19:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:24.056 05:19:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:24.056 05:19:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:24.056 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.056 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.056 
05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.056 05:19:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:24.056 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.056 05:19:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.056 05:19:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.056 05:19:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:24.056 05:19:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.056 05:19:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.056 05:19:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.056 05:19:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:24.056 05:19:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:24.056 05:19:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:24.056 00:05:24.056 real 0m0.233s 00:05:24.056 user 0m0.150s 00:05:24.056 sys 0m0.026s 00:05:24.056 05:19:31 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.056 05:19:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.056 ************************************ 00:05:24.056 END TEST rpc_integrity 00:05:24.056 ************************************ 00:05:24.056 05:19:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:24.056 05:19:31 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.056 05:19:31 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.056 05:19:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.056 ************************************ 00:05:24.056 START TEST rpc_plugins 00:05:24.056 ************************************ 00:05:24.056 05:19:31 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:24.056 05:19:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:24.056 05:19:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.056 05:19:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:24.056 05:19:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.056 05:19:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:24.056 05:19:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:24.056 05:19:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.056 05:19:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:24.056 05:19:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.056 05:19:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:24.056 { 00:05:24.056 "name": "Malloc1", 00:05:24.056 "aliases": [ 00:05:24.056 "761f9246-5571-4576-b9b1-18830120aad0" 00:05:24.056 ], 00:05:24.056 "product_name": "Malloc disk", 00:05:24.056 "block_size": 4096, 00:05:24.056 "num_blocks": 256, 00:05:24.056 "uuid": "761f9246-5571-4576-b9b1-18830120aad0", 00:05:24.056 "assigned_rate_limits": { 00:05:24.056 "rw_ios_per_sec": 0, 00:05:24.056 "rw_mbytes_per_sec": 0, 00:05:24.056 "r_mbytes_per_sec": 0, 00:05:24.056 "w_mbytes_per_sec": 0 00:05:24.056 }, 00:05:24.056 "claimed": false, 00:05:24.056 "zoned": false, 00:05:24.056 "supported_io_types": { 00:05:24.056 "read": true, 00:05:24.056 "write": true, 00:05:24.056 "unmap": true, 00:05:24.056 "write_zeroes": true, 00:05:24.056 
"flush": true, 00:05:24.056 "reset": true, 00:05:24.056 "compare": false, 00:05:24.056 "compare_and_write": false, 00:05:24.056 "abort": true, 00:05:24.056 "nvme_admin": false, 00:05:24.056 "nvme_io": false 00:05:24.056 }, 00:05:24.056 "memory_domains": [ 00:05:24.056 { 00:05:24.056 "dma_device_id": "system", 00:05:24.056 "dma_device_type": 1 00:05:24.056 }, 00:05:24.056 { 00:05:24.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.056 "dma_device_type": 2 00:05:24.056 } 00:05:24.056 ], 00:05:24.056 "driver_specific": {} 00:05:24.056 } 00:05:24.056 ]' 00:05:24.056 05:19:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:24.056 05:19:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:24.056 05:19:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:24.056 05:19:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.056 05:19:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:24.056 05:19:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.314 05:19:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:24.314 05:19:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.314 05:19:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:24.314 05:19:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.314 05:19:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:24.314 05:19:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:24.314 05:19:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:24.314 00:05:24.314 real 0m0.115s 00:05:24.314 user 0m0.076s 00:05:24.314 sys 0m0.011s 00:05:24.314 05:19:31 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.314 05:19:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:24.314 ************************************ 00:05:24.314 END TEST rpc_plugins 00:05:24.314 ************************************ 00:05:24.314 05:19:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:24.314 05:19:31 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.314 05:19:31 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.314 05:19:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.314 ************************************ 00:05:24.314 START TEST rpc_trace_cmd_test 00:05:24.314 ************************************ 00:05:24.314 05:19:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:24.314 05:19:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:24.314 05:19:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:24.314 05:19:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.314 05:19:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:24.314 05:19:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.314 05:19:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:24.314 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3101342", 00:05:24.314 "tpoint_group_mask": "0x8", 00:05:24.314 "iscsi_conn": { 00:05:24.314 "mask": "0x2", 00:05:24.314 "tpoint_mask": "0x0" 00:05:24.314 }, 00:05:24.314 "scsi": { 00:05:24.314 "mask": "0x4", 00:05:24.314 "tpoint_mask": "0x0" 00:05:24.314 }, 00:05:24.314 "bdev": { 00:05:24.314 "mask": "0x8", 00:05:24.314 "tpoint_mask": 
"0xffffffffffffffff" 00:05:24.314 }, 00:05:24.314 "nvmf_rdma": { 00:05:24.314 "mask": "0x10", 00:05:24.314 "tpoint_mask": "0x0" 00:05:24.314 }, 00:05:24.314 "nvmf_tcp": { 00:05:24.314 "mask": "0x20", 00:05:24.314 "tpoint_mask": "0x0" 00:05:24.314 }, 00:05:24.314 "ftl": { 00:05:24.314 "mask": "0x40", 00:05:24.314 "tpoint_mask": "0x0" 00:05:24.314 }, 00:05:24.314 "blobfs": { 00:05:24.314 "mask": "0x80", 00:05:24.314 "tpoint_mask": "0x0" 00:05:24.314 }, 00:05:24.314 "dsa": { 00:05:24.314 "mask": "0x200", 00:05:24.314 "tpoint_mask": "0x0" 00:05:24.314 }, 00:05:24.314 "thread": { 00:05:24.314 "mask": "0x400", 00:05:24.314 "tpoint_mask": "0x0" 00:05:24.314 }, 00:05:24.314 "nvme_pcie": { 00:05:24.314 "mask": "0x800", 00:05:24.314 "tpoint_mask": "0x0" 00:05:24.314 }, 00:05:24.314 "iaa": { 00:05:24.314 "mask": "0x1000", 00:05:24.314 "tpoint_mask": "0x0" 00:05:24.314 }, 00:05:24.314 "nvme_tcp": { 00:05:24.314 "mask": "0x2000", 00:05:24.314 "tpoint_mask": "0x0" 00:05:24.314 }, 00:05:24.314 "bdev_nvme": { 00:05:24.314 "mask": "0x4000", 00:05:24.314 "tpoint_mask": "0x0" 00:05:24.314 }, 00:05:24.314 "sock": { 00:05:24.314 "mask": "0x8000", 00:05:24.314 "tpoint_mask": "0x0" 00:05:24.314 } 00:05:24.314 }' 00:05:24.314 05:19:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:24.314 05:19:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:24.314 05:19:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:24.314 05:19:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:24.314 05:19:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:24.314 05:19:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:24.314 05:19:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:24.314 05:19:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:24.314 05:19:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:24.598 05:19:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:24.598 00:05:24.598 real 0m0.195s 00:05:24.598 user 0m0.176s 00:05:24.598 sys 0m0.011s 00:05:24.598 05:19:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.598 05:19:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:24.598 ************************************ 00:05:24.598 END TEST rpc_trace_cmd_test 00:05:24.598 ************************************ 00:05:24.598 05:19:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:24.598 05:19:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:24.598 05:19:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:24.598 05:19:31 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.598 05:19:31 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.598 05:19:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.598 ************************************ 00:05:24.598 START TEST rpc_daemon_integrity 00:05:24.598 ************************************ 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:24.598 { 00:05:24.598 "name": "Malloc2", 00:05:24.598 "aliases": [ 00:05:24.598 "107e2bf1-e005-43dc-912a-61775f59238c" 00:05:24.598 ], 00:05:24.598 "product_name": "Malloc disk", 00:05:24.598 "block_size": 512, 00:05:24.598 "num_blocks": 16384, 00:05:24.598 "uuid": "107e2bf1-e005-43dc-912a-61775f59238c", 00:05:24.598 "assigned_rate_limits": { 00:05:24.598 "rw_ios_per_sec": 0, 00:05:24.598 "rw_mbytes_per_sec": 0, 00:05:24.598 "r_mbytes_per_sec": 0, 00:05:24.598 "w_mbytes_per_sec": 0 00:05:24.598 }, 00:05:24.598 "claimed": false, 00:05:24.598 "zoned": false, 00:05:24.598 "supported_io_types": { 00:05:24.598 "read": true, 00:05:24.598 "write": true, 00:05:24.598 "unmap": true, 00:05:24.598 "write_zeroes": true, 00:05:24.598 "flush": true, 00:05:24.598 "reset": true, 00:05:24.598 "compare": false, 00:05:24.598 "compare_and_write": false, 00:05:24.598 "abort": true, 00:05:24.598 "nvme_admin": false, 00:05:24.598 "nvme_io": false 00:05:24.598 }, 00:05:24.598 "memory_domains": [ 00:05:24.598 { 00:05:24.598 "dma_device_id": "system", 00:05:24.598 "dma_device_type": 1 00:05:24.598 }, 00:05:24.598 { 00:05:24.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.598 "dma_device_type": 2 00:05:24.598 } 00:05:24.598 ], 00:05:24.598 "driver_specific": {} 00:05:24.598 } 00:05:24.598 ]' 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.598 [2024-07-14 05:19:31.606300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:24.598 [2024-07-14 05:19:31.606350] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:24.598 [2024-07-14 05:19:31.606375] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x178b420 00:05:24.598 [2024-07-14 05:19:31.606390] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:24.598 [2024-07-14 05:19:31.607740] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:24.598 [2024-07-14 05:19:31.607769] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:24.598 Passthru0 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.598 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:24.598 { 00:05:24.598 "name": "Malloc2", 00:05:24.598 "aliases": [ 00:05:24.598 "107e2bf1-e005-43dc-912a-61775f59238c" 00:05:24.598 ], 00:05:24.598 "product_name": "Malloc disk", 00:05:24.598 "block_size": 512, 00:05:24.598 "num_blocks": 16384, 00:05:24.598 "uuid": "107e2bf1-e005-43dc-912a-61775f59238c", 00:05:24.598 "assigned_rate_limits": { 00:05:24.598 "rw_ios_per_sec": 0, 00:05:24.598 "rw_mbytes_per_sec": 0, 00:05:24.598 "r_mbytes_per_sec": 0, 00:05:24.598 "w_mbytes_per_sec": 0 00:05:24.598 }, 00:05:24.598 "claimed": true, 00:05:24.598 "claim_type": "exclusive_write", 00:05:24.598 "zoned": false, 00:05:24.598 "supported_io_types": { 00:05:24.598 "read": true, 00:05:24.598 "write": true, 00:05:24.598 "unmap": true, 00:05:24.598 "write_zeroes": true, 00:05:24.598 "flush": true, 00:05:24.598 "reset": true, 00:05:24.598 "compare": false, 00:05:24.598 "compare_and_write": false, 00:05:24.598 "abort": true, 00:05:24.598 "nvme_admin": false, 00:05:24.598 "nvme_io": false 00:05:24.598 }, 00:05:24.598 "memory_domains": [ 00:05:24.598 { 00:05:24.598 "dma_device_id": "system", 00:05:24.598 "dma_device_type": 1 00:05:24.598 }, 00:05:24.598 { 00:05:24.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.598 "dma_device_type": 2 00:05:24.598 } 00:05:24.598 ], 00:05:24.598 "driver_specific": {} 00:05:24.598 }, 00:05:24.598 { 00:05:24.598 "name": "Passthru0", 00:05:24.598 "aliases": [ 00:05:24.598 "27b7a770-0305-5cff-be5e-1406285575c5" 00:05:24.598 ], 00:05:24.598 "product_name": "passthru", 00:05:24.598 "block_size": 512, 00:05:24.598 "num_blocks": 16384, 00:05:24.598 "uuid": "27b7a770-0305-5cff-be5e-1406285575c5", 00:05:24.598 "assigned_rate_limits": { 00:05:24.598 "rw_ios_per_sec": 0, 00:05:24.598 "rw_mbytes_per_sec": 0, 00:05:24.598 "r_mbytes_per_sec": 0, 00:05:24.598 "w_mbytes_per_sec": 0 00:05:24.598 }, 00:05:24.598 "claimed": false, 00:05:24.598 "zoned": false, 00:05:24.598 "supported_io_types": { 00:05:24.598 "read": true, 00:05:24.598 "write": true, 00:05:24.598 "unmap": true, 00:05:24.598 "write_zeroes": true, 00:05:24.598 "flush": true, 00:05:24.599 "reset": true, 00:05:24.599 "compare": false, 00:05:24.599 "compare_and_write": false, 00:05:24.599 "abort": true, 00:05:24.599 "nvme_admin": false, 00:05:24.599 "nvme_io": false 00:05:24.599 }, 00:05:24.599 "memory_domains": [ 00:05:24.599 { 00:05:24.599 "dma_device_id": "system", 00:05:24.599 "dma_device_type": 1 00:05:24.599 }, 00:05:24.599 { 00:05:24.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.599 "dma_device_type": 2 00:05:24.599 } 00:05:24.599 ], 00:05:24.599 "driver_specific": { 00:05:24.599 "passthru": { 00:05:24.599 "name": "Passthru0", 00:05:24.599 "base_bdev_name": "Malloc2" 00:05:24.599 } 00:05:24.599 } 00:05:24.599 } 00:05:24.599 ]' 00:05:24.599 05:19:31 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:24.599 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:24.599 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:24.599 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.599 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.599 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.599 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:24.599 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.599 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.599 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.599 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:24.599 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.599 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.599 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.599 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:24.599 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:24.857 05:19:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:24.857 00:05:24.857 real 0m0.232s 00:05:24.857 user 0m0.153s 00:05:24.857 sys 0m0.025s 00:05:24.857 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.857 05:19:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.857 ************************************ 00:05:24.857 END TEST rpc_daemon_integrity 00:05:24.857 ************************************ 00:05:24.857 05:19:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:24.857 05:19:31 rpc -- rpc/rpc.sh@84 -- # killprocess 3101342 00:05:24.857 05:19:31 rpc -- common/autotest_common.sh@946 -- # '[' -z 3101342 ']' 00:05:24.857 05:19:31 rpc -- common/autotest_common.sh@950 -- # kill -0 3101342 00:05:24.857 05:19:31 rpc -- common/autotest_common.sh@951 -- # uname 00:05:24.857 05:19:31 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:24.857 05:19:31 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3101342 00:05:24.857 05:19:31 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:24.857 05:19:31 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:24.857 05:19:31 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3101342' 00:05:24.857 killing process with pid 3101342 00:05:24.857 05:19:31 rpc -- common/autotest_common.sh@965 -- # kill 3101342 00:05:24.857 05:19:31 rpc -- common/autotest_common.sh@970 -- # wait 3101342 00:05:25.115 00:05:25.115 real 0m1.915s 00:05:25.115 user 0m2.416s 00:05:25.115 sys 0m0.586s 00:05:25.115 05:19:32 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.115 05:19:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.115 ************************************ 00:05:25.115 END TEST rpc 00:05:25.115 ************************************ 00:05:25.115 05:19:32 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:25.115 05:19:32 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.115 05:19:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.115 05:19:32 -- common/autotest_common.sh@10 -- # set +x 00:05:25.373 ************************************ 00:05:25.373 START TEST skip_rpc 00:05:25.373 ************************************ 00:05:25.373 05:19:32 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:25.373 * Looking for test storage... 00:05:25.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:25.373 05:19:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:25.373 05:19:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:25.373 05:19:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:25.373 05:19:32 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.373 05:19:32 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.373 05:19:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.373 ************************************ 00:05:25.373 START TEST skip_rpc 00:05:25.373 ************************************ 00:05:25.373 05:19:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:25.373 05:19:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3101773 00:05:25.373 05:19:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:25.373 05:19:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.373 05:19:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:25.373 [2024-07-14 05:19:32.359220] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
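For reference, the rpc_daemon_integrity trace above reduces to a short RPC sequence: create a malloc bdev, layer a passthru vbdev on it, confirm both appear in bdev_get_bdevs, then tear both down. The sketch below is reconstructed from the xtrace and is illustrative only (rpc_cmd is the suite's wrapper around scripts/rpc.py, not shown here):

    # reconstructed from the trace above; not the literal rpc.sh source
    rpc_cmd bdev_malloc_create 8 512                      # creates Malloc2 (16384 blocks of 512B)
    rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0  # passthru vbdev claims the malloc base
    rpc_cmd bdev_get_bdevs | jq length                    # expect 2: Malloc2 + Passthru0
    rpc_cmd bdev_passthru_delete Passthru0
    rpc_cmd bdev_malloc_delete Malloc2
    rpc_cmd bdev_get_bdevs | jq length                    # expect 0 once both are gone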
00:05:25.373 [2024-07-14 05:19:32.359329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3101773 ] 00:05:25.373 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.373 [2024-07-14 05:19:32.417724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.631 [2024-07-14 05:19:32.505590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3101773 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3101773 ']' 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3101773 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3101773 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3101773' 00:05:30.889 killing process with pid 3101773 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3101773 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3101773 00:05:30.889 00:05:30.889 real 0m5.445s 00:05:30.889 user 0m5.138s 00:05:30.889 sys 0m0.310s 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.889 05:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.889 ************************************ 00:05:30.889 END TEST skip_rpc 
00:05:30.889 ************************************ 00:05:30.889 05:19:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:30.889 05:19:37 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.889 05:19:37 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.889 05:19:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.889 ************************************ 00:05:30.889 START TEST skip_rpc_with_json 00:05:30.889 ************************************ 00:05:30.889 05:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:30.889 05:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:30.889 05:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3102458 00:05:30.889 05:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.890 05:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.890 05:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3102458 00:05:30.890 05:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3102458 ']' 00:05:30.890 05:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.890 05:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:30.890 05:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.890 05:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:30.890 05:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.890 [2024-07-14 05:19:37.853917] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
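The skip_rpc case that just finished verifies the negative path: when the target is launched with --no-rpc-server, any RPC attempt must fail. A minimal sketch of that flow, assuming the NOT and killprocess helpers from autotest_common.sh and with the spdk_tgt path shortened:

    spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5
    NOT rpc_cmd spdk_get_version   # NOT inverts the exit status: the test passes only if the RPC fails
    killprocess "$spdk_pid"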
00:05:30.890 [2024-07-14 05:19:37.854015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3102458 ] 00:05:30.890 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.890 [2024-07-14 05:19:37.917008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.148 [2024-07-14 05:19:38.009396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.407 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:31.407 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:31.407 05:19:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:31.407 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.407 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.407 [2024-07-14 05:19:38.270115] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:31.407 request: 00:05:31.407 { 00:05:31.407 "trtype": "tcp", 00:05:31.407 "method": "nvmf_get_transports", 00:05:31.407 "req_id": 1 00:05:31.407 } 00:05:31.407 Got JSON-RPC error response 00:05:31.407 response: 00:05:31.407 { 00:05:31.407 "code": -19, 00:05:31.407 "message": "No such device" 00:05:31.407 } 00:05:31.407 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:31.407 05:19:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:31.407 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.407 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.407 [2024-07-14 05:19:38.278228] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:31.407 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.407 05:19:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:31.407 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.407 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.407 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.407 05:19:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:31.407 { 00:05:31.407 "subsystems": [ 00:05:31.408 { 00:05:31.408 "subsystem": "vfio_user_target", 00:05:31.408 "config": null 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "subsystem": "keyring", 00:05:31.408 "config": [] 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "subsystem": "iobuf", 00:05:31.408 "config": [ 00:05:31.408 { 00:05:31.408 "method": "iobuf_set_options", 00:05:31.408 "params": { 00:05:31.408 "small_pool_count": 8192, 00:05:31.408 "large_pool_count": 1024, 00:05:31.408 "small_bufsize": 8192, 00:05:31.408 "large_bufsize": 135168 00:05:31.408 } 00:05:31.408 } 00:05:31.408 ] 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "subsystem": "sock", 00:05:31.408 "config": [ 00:05:31.408 { 00:05:31.408 "method": "sock_set_default_impl", 00:05:31.408 "params": { 00:05:31.408 "impl_name": "posix" 00:05:31.408 } 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "method": 
"sock_impl_set_options", 00:05:31.408 "params": { 00:05:31.408 "impl_name": "ssl", 00:05:31.408 "recv_buf_size": 4096, 00:05:31.408 "send_buf_size": 4096, 00:05:31.408 "enable_recv_pipe": true, 00:05:31.408 "enable_quickack": false, 00:05:31.408 "enable_placement_id": 0, 00:05:31.408 "enable_zerocopy_send_server": true, 00:05:31.408 "enable_zerocopy_send_client": false, 00:05:31.408 "zerocopy_threshold": 0, 00:05:31.408 "tls_version": 0, 00:05:31.408 "enable_ktls": false 00:05:31.408 } 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "method": "sock_impl_set_options", 00:05:31.408 "params": { 00:05:31.408 "impl_name": "posix", 00:05:31.408 "recv_buf_size": 2097152, 00:05:31.408 "send_buf_size": 2097152, 00:05:31.408 "enable_recv_pipe": true, 00:05:31.408 "enable_quickack": false, 00:05:31.408 "enable_placement_id": 0, 00:05:31.408 "enable_zerocopy_send_server": true, 00:05:31.408 "enable_zerocopy_send_client": false, 00:05:31.408 "zerocopy_threshold": 0, 00:05:31.408 "tls_version": 0, 00:05:31.408 "enable_ktls": false 00:05:31.408 } 00:05:31.408 } 00:05:31.408 ] 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "subsystem": "vmd", 00:05:31.408 "config": [] 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "subsystem": "accel", 00:05:31.408 "config": [ 00:05:31.408 { 00:05:31.408 "method": "accel_set_options", 00:05:31.408 "params": { 00:05:31.408 "small_cache_size": 128, 00:05:31.408 "large_cache_size": 16, 00:05:31.408 "task_count": 2048, 00:05:31.408 "sequence_count": 2048, 00:05:31.408 "buf_count": 2048 00:05:31.408 } 00:05:31.408 } 00:05:31.408 ] 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "subsystem": "bdev", 00:05:31.408 "config": [ 00:05:31.408 { 00:05:31.408 "method": "bdev_set_options", 00:05:31.408 "params": { 00:05:31.408 "bdev_io_pool_size": 65535, 00:05:31.408 "bdev_io_cache_size": 256, 00:05:31.408 "bdev_auto_examine": true, 00:05:31.408 "iobuf_small_cache_size": 128, 00:05:31.408 "iobuf_large_cache_size": 16 00:05:31.408 } 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "method": "bdev_raid_set_options", 00:05:31.408 "params": { 00:05:31.408 "process_window_size_kb": 1024 00:05:31.408 } 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "method": "bdev_iscsi_set_options", 00:05:31.408 "params": { 00:05:31.408 "timeout_sec": 30 00:05:31.408 } 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "method": "bdev_nvme_set_options", 00:05:31.408 "params": { 00:05:31.408 "action_on_timeout": "none", 00:05:31.408 "timeout_us": 0, 00:05:31.408 "timeout_admin_us": 0, 00:05:31.408 "keep_alive_timeout_ms": 10000, 00:05:31.408 "arbitration_burst": 0, 00:05:31.408 "low_priority_weight": 0, 00:05:31.408 "medium_priority_weight": 0, 00:05:31.408 "high_priority_weight": 0, 00:05:31.408 "nvme_adminq_poll_period_us": 10000, 00:05:31.408 "nvme_ioq_poll_period_us": 0, 00:05:31.408 "io_queue_requests": 0, 00:05:31.408 "delay_cmd_submit": true, 00:05:31.408 "transport_retry_count": 4, 00:05:31.408 "bdev_retry_count": 3, 00:05:31.408 "transport_ack_timeout": 0, 00:05:31.408 "ctrlr_loss_timeout_sec": 0, 00:05:31.408 "reconnect_delay_sec": 0, 00:05:31.408 "fast_io_fail_timeout_sec": 0, 00:05:31.408 "disable_auto_failback": false, 00:05:31.408 "generate_uuids": false, 00:05:31.408 "transport_tos": 0, 00:05:31.408 "nvme_error_stat": false, 00:05:31.408 "rdma_srq_size": 0, 00:05:31.408 "io_path_stat": false, 00:05:31.408 "allow_accel_sequence": false, 00:05:31.408 "rdma_max_cq_size": 0, 00:05:31.408 "rdma_cm_event_timeout_ms": 0, 00:05:31.408 "dhchap_digests": [ 00:05:31.408 "sha256", 00:05:31.408 "sha384", 00:05:31.408 "sha512" 
00:05:31.408 ], 00:05:31.408 "dhchap_dhgroups": [ 00:05:31.408 "null", 00:05:31.408 "ffdhe2048", 00:05:31.408 "ffdhe3072", 00:05:31.408 "ffdhe4096", 00:05:31.408 "ffdhe6144", 00:05:31.408 "ffdhe8192" 00:05:31.408 ] 00:05:31.408 } 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "method": "bdev_nvme_set_hotplug", 00:05:31.408 "params": { 00:05:31.408 "period_us": 100000, 00:05:31.408 "enable": false 00:05:31.408 } 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "method": "bdev_wait_for_examine" 00:05:31.408 } 00:05:31.408 ] 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "subsystem": "scsi", 00:05:31.408 "config": null 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "subsystem": "scheduler", 00:05:31.408 "config": [ 00:05:31.408 { 00:05:31.408 "method": "framework_set_scheduler", 00:05:31.408 "params": { 00:05:31.408 "name": "static" 00:05:31.408 } 00:05:31.408 } 00:05:31.408 ] 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "subsystem": "vhost_scsi", 00:05:31.408 "config": [] 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "subsystem": "vhost_blk", 00:05:31.408 "config": [] 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "subsystem": "ublk", 00:05:31.408 "config": [] 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "subsystem": "nbd", 00:05:31.408 "config": [] 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "subsystem": "nvmf", 00:05:31.408 "config": [ 00:05:31.408 { 00:05:31.408 "method": "nvmf_set_config", 00:05:31.408 "params": { 00:05:31.408 "discovery_filter": "match_any", 00:05:31.408 "admin_cmd_passthru": { 00:05:31.408 "identify_ctrlr": false 00:05:31.408 } 00:05:31.408 } 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "method": "nvmf_set_max_subsystems", 00:05:31.408 "params": { 00:05:31.408 "max_subsystems": 1024 00:05:31.408 } 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "method": "nvmf_set_crdt", 00:05:31.408 "params": { 00:05:31.408 "crdt1": 0, 00:05:31.408 "crdt2": 0, 00:05:31.408 "crdt3": 0 00:05:31.408 } 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "method": "nvmf_create_transport", 00:05:31.408 "params": { 00:05:31.408 "trtype": "TCP", 00:05:31.408 "max_queue_depth": 128, 00:05:31.408 "max_io_qpairs_per_ctrlr": 127, 00:05:31.408 "in_capsule_data_size": 4096, 00:05:31.408 "max_io_size": 131072, 00:05:31.408 "io_unit_size": 131072, 00:05:31.408 "max_aq_depth": 128, 00:05:31.408 "num_shared_buffers": 511, 00:05:31.408 "buf_cache_size": 4294967295, 00:05:31.408 "dif_insert_or_strip": false, 00:05:31.408 "zcopy": false, 00:05:31.408 "c2h_success": true, 00:05:31.408 "sock_priority": 0, 00:05:31.408 "abort_timeout_sec": 1, 00:05:31.408 "ack_timeout": 0, 00:05:31.408 "data_wr_pool_size": 0 00:05:31.408 } 00:05:31.408 } 00:05:31.408 ] 00:05:31.408 }, 00:05:31.408 { 00:05:31.408 "subsystem": "iscsi", 00:05:31.408 "config": [ 00:05:31.408 { 00:05:31.408 "method": "iscsi_set_options", 00:05:31.408 "params": { 00:05:31.408 "node_base": "iqn.2016-06.io.spdk", 00:05:31.408 "max_sessions": 128, 00:05:31.408 "max_connections_per_session": 2, 00:05:31.408 "max_queue_depth": 64, 00:05:31.408 "default_time2wait": 2, 00:05:31.408 "default_time2retain": 20, 00:05:31.408 "first_burst_length": 8192, 00:05:31.408 "immediate_data": true, 00:05:31.408 "allow_duplicated_isid": false, 00:05:31.408 "error_recovery_level": 0, 00:05:31.408 "nop_timeout": 60, 00:05:31.408 "nop_in_interval": 30, 00:05:31.408 "disable_chap": false, 00:05:31.408 "require_chap": false, 00:05:31.409 "mutual_chap": false, 00:05:31.409 "chap_group": 0, 00:05:31.409 "max_large_datain_per_connection": 64, 00:05:31.409 "max_r2t_per_connection": 4, 00:05:31.409 
"pdu_pool_size": 36864, 00:05:31.409 "immediate_data_pool_size": 16384, 00:05:31.409 "data_out_pool_size": 2048 00:05:31.409 } 00:05:31.409 } 00:05:31.409 ] 00:05:31.409 } 00:05:31.409 ] 00:05:31.409 } 00:05:31.409 05:19:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:31.409 05:19:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3102458 00:05:31.409 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3102458 ']' 00:05:31.409 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3102458 00:05:31.409 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:31.409 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:31.409 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3102458 00:05:31.409 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:31.409 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:31.409 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3102458' 00:05:31.409 killing process with pid 3102458 00:05:31.409 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3102458 00:05:31.409 05:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3102458 00:05:31.974 05:19:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3102598 00:05:31.974 05:19:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:31.974 05:19:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:37.271 05:19:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3102598 00:05:37.271 05:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3102598 ']' 00:05:37.271 05:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3102598 00:05:37.271 05:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:37.271 05:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:37.271 05:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3102598 00:05:37.271 05:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:37.271 05:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:37.271 05:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3102598' 00:05:37.271 killing process with pid 3102598 00:05:37.271 05:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3102598 00:05:37.271 05:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3102598 00:05:37.271 05:19:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:37.271 05:19:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:37.271 00:05:37.271 real 
0m6.515s 00:05:37.271 user 0m6.148s 00:05:37.271 sys 0m0.680s 00:05:37.271 05:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.271 05:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.271 ************************************ 00:05:37.271 END TEST skip_rpc_with_json 00:05:37.271 ************************************ 00:05:37.271 05:19:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:37.271 05:19:44 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.271 05:19:44 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.271 05:19:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.271 ************************************ 00:05:37.271 START TEST skip_rpc_with_delay 00:05:37.271 ************************************ 00:05:37.271 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:37.271 05:19:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:37.271 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:37.271 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:37.271 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:37.271 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.271 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:37.271 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.271 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:37.271 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.271 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:37.271 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:37.271 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:37.529 [2024-07-14 05:19:44.423950] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
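The large JSON dump above is the configuration that skip_rpc_with_json saves and replays: the transport is created over RPC, the live config is written out with save_config, the target is killed, and a fresh target started from the JSON file must bring the TCP transport back without any RPC traffic. Condensed sketch (paths shortened, redirections assumed):

    rpc_cmd nvmf_create_transport -t tcp
    rpc_cmd save_config > test/rpc/config.json            # dump the running configuration
    killprocess "$spdk_pid"
    spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json &   # replay the saved config
    grep -q 'TCP Transport Init' test/rpc/log.txt                   # transport restored purely from JSON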
00:05:37.529 [2024-07-14 05:19:44.424054] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:37.529 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:37.529 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.529 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:37.529 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.529 00:05:37.529 real 0m0.069s 00:05:37.529 user 0m0.040s 00:05:37.529 sys 0m0.029s 00:05:37.529 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.529 05:19:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:37.529 ************************************ 00:05:37.529 END TEST skip_rpc_with_delay 00:05:37.529 ************************************ 00:05:37.529 05:19:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:37.529 05:19:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:37.529 05:19:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:37.529 05:19:44 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.529 05:19:44 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.529 05:19:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.529 ************************************ 00:05:37.529 START TEST exit_on_failed_rpc_init 00:05:37.529 ************************************ 00:05:37.529 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:37.529 05:19:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3103322 00:05:37.529 05:19:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.529 05:19:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3103322 00:05:37.529 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3103322 ']' 00:05:37.529 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.529 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:37.529 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.529 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:37.529 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:37.529 [2024-07-14 05:19:44.535978] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
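skip_rpc_with_delay, whose result appears above, checks that --wait-for-rpc and --no-rpc-server are mutually exclusive: startup must fail with the "Cannot use '--wait-for-rpc'" error, and the helper turns that failure into a pass. Illustrative one-liner:

    NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc   # must exit non-zero for the test to pass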
00:05:37.529 [2024-07-14 05:19:44.536052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3103322 ] 00:05:37.529 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.529 [2024-07-14 05:19:44.593862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.787 [2024-07-14 05:19:44.684140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.045 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:38.045 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:38.045 05:19:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.045 05:19:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:38.045 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:38.045 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:38.045 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.045 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.045 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.045 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.045 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.045 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.045 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.045 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:38.045 05:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:38.045 [2024-07-14 05:19:44.991004] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:38.045 [2024-07-14 05:19:44.991084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3103331 ] 00:05:38.045 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.045 [2024-07-14 05:19:45.053343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.045 [2024-07-14 05:19:45.148548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.045 [2024-07-14 05:19:45.148647] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
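exit_on_failed_rpc_init provokes the "RPC Unix domain socket path /var/tmp/spdk.sock in use" error seen above by starting two targets against the default RPC socket. Sketch of the collision, using the waitforlisten and NOT helpers visible in the trace:

    spdk_tgt -m 0x1 &            # first instance owns /var/tmp/spdk.sock
    waitforlisten "$!"
    NOT spdk_tgt -m 0x2          # second instance cannot bind the same socket and must exit non-zero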
00:05:38.045 [2024-07-14 05:19:45.148668] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:38.046 [2024-07-14 05:19:45.148681] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3103322 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3103322 ']' 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3103322 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3103322 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3103322' 00:05:38.304 killing process with pid 3103322 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3103322 00:05:38.304 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3103322 00:05:38.870 00:05:38.870 real 0m1.194s 00:05:38.870 user 0m1.295s 00:05:38.870 sys 0m0.464s 00:05:38.871 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.871 05:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:38.871 ************************************ 00:05:38.871 END TEST exit_on_failed_rpc_init 00:05:38.871 ************************************ 00:05:38.871 05:19:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:38.871 00:05:38.871 real 0m13.465s 00:05:38.871 user 0m12.722s 00:05:38.871 sys 0m1.640s 00:05:38.871 05:19:45 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.871 05:19:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.871 ************************************ 00:05:38.871 END TEST skip_rpc 00:05:38.871 ************************************ 00:05:38.871 05:19:45 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:38.871 05:19:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.871 05:19:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.871 05:19:45 -- 
common/autotest_common.sh@10 -- # set +x 00:05:38.871 ************************************ 00:05:38.871 START TEST rpc_client 00:05:38.871 ************************************ 00:05:38.871 05:19:45 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:38.871 * Looking for test storage... 00:05:38.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:38.871 05:19:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:38.871 OK 00:05:38.871 05:19:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:38.871 00:05:38.871 real 0m0.070s 00:05:38.871 user 0m0.032s 00:05:38.871 sys 0m0.043s 00:05:38.871 05:19:45 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.871 05:19:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:38.871 ************************************ 00:05:38.871 END TEST rpc_client 00:05:38.871 ************************************ 00:05:38.871 05:19:45 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:38.871 05:19:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.871 05:19:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.871 05:19:45 -- common/autotest_common.sh@10 -- # set +x 00:05:38.871 ************************************ 00:05:38.871 START TEST json_config 00:05:38.871 ************************************ 00:05:38.871 05:19:45 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:38.871 05:19:45 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.871 05:19:45 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.871 05:19:45 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.871 05:19:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.871 05:19:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.871 05:19:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.871 05:19:45 json_config -- paths/export.sh@5 -- # export PATH 00:05:38.871 05:19:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@47 -- # : 0 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:38.871 05:19:45 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:38.871 INFO: JSON configuration test init 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:38.871 05:19:45 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:38.871 05:19:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:38.871 05:19:45 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:38.871 05:19:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.871 05:19:45 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:38.871 05:19:45 json_config -- json_config/common.sh@9 -- # local app=target 00:05:38.871 05:19:45 json_config -- json_config/common.sh@10 -- # shift 00:05:38.871 05:19:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:38.871 05:19:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:38.871 05:19:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:38.871 05:19:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.871 05:19:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.871 05:19:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3103573 00:05:38.871 05:19:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:38.871 05:19:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:38.871 Waiting for target to run... 
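The json_config target is driven differently from the rpc tests: it is parked with --wait-for-rpc on a private socket and then configured entirely through rpc.py pointed at that socket, as the trace below shows. A minimal sketch, assuming the waitforlisten helper and with repository paths shortened:

    spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    waitforlisten "$!" /var/tmp/spdk_tgt.sock
    scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config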
00:05:38.871 05:19:45 json_config -- json_config/common.sh@25 -- # waitforlisten 3103573 /var/tmp/spdk_tgt.sock 00:05:38.871 05:19:45 json_config -- common/autotest_common.sh@827 -- # '[' -z 3103573 ']' 00:05:38.871 05:19:45 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:38.871 05:19:45 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:38.871 05:19:45 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:38.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:38.871 05:19:45 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:38.871 05:19:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.130 [2024-07-14 05:19:45.978286] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:39.130 [2024-07-14 05:19:45.978384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3103573 ] 00:05:39.130 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.388 [2024-07-14 05:19:46.332630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.388 [2024-07-14 05:19:46.396002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.955 05:19:46 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:39.955 05:19:46 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:39.955 05:19:46 json_config -- json_config/common.sh@26 -- # echo '' 00:05:39.955 00:05:39.955 05:19:46 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:39.955 05:19:46 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:39.955 05:19:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:39.955 05:19:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.955 05:19:46 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:39.955 05:19:46 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:39.955 05:19:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.955 05:19:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.955 05:19:46 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:39.955 05:19:46 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:39.955 05:19:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:43.241 05:19:50 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:43.241 05:19:50 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:43.241 05:19:50 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:43.241 05:19:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.241 05:19:50 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:43.241 05:19:50 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:43.241 05:19:50 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:05:43.241 05:19:50 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:43.241 05:19:50 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:43.241 05:19:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:43.241 05:19:50 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:43.241 05:19:50 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:43.241 05:19:50 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:43.241 05:19:50 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:43.241 05:19:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.241 05:19:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.241 05:19:50 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:43.499 05:19:50 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:43.499 05:19:50 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:43.499 05:19:50 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:43.499 05:19:50 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:43.499 05:19:50 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:43.499 05:19:50 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:43.499 05:19:50 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:43.499 05:19:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.499 05:19:50 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:43.499 05:19:50 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:43.499 05:19:50 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:43.499 05:19:50 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:43.499 05:19:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:43.500 MallocForNvmf0 00:05:43.500 05:19:50 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:43.500 05:19:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:43.758 MallocForNvmf1 00:05:43.758 05:19:50 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:43.758 05:19:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:44.017 [2024-07-14 05:19:51.061813] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:44.017 05:19:51 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:44.017 05:19:51 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:44.274 05:19:51 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:44.275 05:19:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:44.532 05:19:51 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:44.532 05:19:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:44.790 05:19:51 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:44.790 05:19:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:45.047 [2024-07-14 05:19:52.024972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:45.047 05:19:52 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:45.047 05:19:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:45.047 05:19:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.047 05:19:52 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:45.047 05:19:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:45.047 05:19:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.047 05:19:52 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:45.047 05:19:52 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:45.047 05:19:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:45.305 MallocBdevForConfigChangeCheck 00:05:45.305 05:19:52 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:45.305 05:19:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:45.305 05:19:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.305 05:19:52 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:45.305 05:19:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.871 05:19:52 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:45.871 INFO: shutting down applications... 
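Condensed, the NVMe-oF/TCP configuration built over RPC in the lines above is the following sequence (commands as issued in the trace, against the same /var/tmp/spdk_tgt.sock socket; only the shell variable is added here for brevity):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420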
00:05:45.871 05:19:52 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:45.871 05:19:52 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:45.871 05:19:52 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:45.871 05:19:52 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:47.767 Calling clear_iscsi_subsystem 00:05:47.767 Calling clear_nvmf_subsystem 00:05:47.767 Calling clear_nbd_subsystem 00:05:47.767 Calling clear_ublk_subsystem 00:05:47.767 Calling clear_vhost_blk_subsystem 00:05:47.767 Calling clear_vhost_scsi_subsystem 00:05:47.767 Calling clear_bdev_subsystem 00:05:47.767 05:19:54 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:47.767 05:19:54 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:47.767 05:19:54 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:47.767 05:19:54 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:47.767 05:19:54 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:47.767 05:19:54 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:47.768 05:19:54 json_config -- json_config/json_config.sh@345 -- # break 00:05:47.768 05:19:54 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:47.768 05:19:54 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:47.768 05:19:54 json_config -- json_config/common.sh@31 -- # local app=target 00:05:47.768 05:19:54 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:47.768 05:19:54 json_config -- json_config/common.sh@35 -- # [[ -n 3103573 ]] 00:05:47.768 05:19:54 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3103573 00:05:47.768 05:19:54 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:47.768 05:19:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.768 05:19:54 json_config -- json_config/common.sh@41 -- # kill -0 3103573 00:05:47.768 05:19:54 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:48.337 05:19:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:48.337 05:19:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.337 05:19:55 json_config -- json_config/common.sh@41 -- # kill -0 3103573 00:05:48.337 05:19:55 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:48.337 05:19:55 json_config -- json_config/common.sh@43 -- # break 00:05:48.337 05:19:55 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:48.337 05:19:55 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:48.337 SPDK target shutdown done 00:05:48.337 05:19:55 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:48.337 INFO: relaunching applications... 
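The shutdown that just completed is a plain signal-and-poll loop, visible in the common.sh trace: send SIGINT to the target pid, then retest it with kill -0 for up to 30 half-second intervals before reporting that the target shutdown is done. Sketched (tgt_pid stands in for the pid the harness tracks):

    kill -SIGINT "$tgt_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$tgt_pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
        sleep 0.5
    done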
00:05:48.337 05:19:55 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.337 05:19:55 json_config -- json_config/common.sh@9 -- # local app=target 00:05:48.337 05:19:55 json_config -- json_config/common.sh@10 -- # shift 00:05:48.337 05:19:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:48.337 05:19:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:48.337 05:19:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:48.337 05:19:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.337 05:19:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.337 05:19:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3104792 00:05:48.337 05:19:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.337 05:19:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:48.337 Waiting for target to run... 00:05:48.337 05:19:55 json_config -- json_config/common.sh@25 -- # waitforlisten 3104792 /var/tmp/spdk_tgt.sock 00:05:48.337 05:19:55 json_config -- common/autotest_common.sh@827 -- # '[' -z 3104792 ']' 00:05:48.337 05:19:55 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:48.337 05:19:55 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:48.337 05:19:55 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:48.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:48.337 05:19:55 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:48.337 05:19:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.337 [2024-07-14 05:19:55.340254] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:48.337 [2024-07-14 05:19:55.340343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3104792 ] 00:05:48.337 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.906 [2024-07-14 05:19:55.860960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.906 [2024-07-14 05:19:55.942958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.190 [2024-07-14 05:19:58.970759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.190 [2024-07-14 05:19:59.003219] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:52.823 05:19:59 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:52.823 05:19:59 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:52.823 05:19:59 json_config -- json_config/common.sh@26 -- # echo '' 00:05:52.823 00:05:52.823 05:19:59 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:52.823 05:19:59 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:52.823 INFO: Checking if target configuration is the same... 
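The check announced here boils down to: dump the live configuration with save_config, pass both that dump and the on-disk spdk_tgt_config.json through config_filter.py -method sort, and diff the normalized results; an empty diff is reported as the configs being the same. A simplified sketch, assuming config_filter.py reads the JSON on stdin as the pipelines in the trace suggest (json_diff.sh itself uses mktemp and /dev/fd process substitution rather than these fixed temp paths):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
    $RPC save_config | $FILTER -method sort > /tmp/live_sorted.json
    $FILTER -method sort < /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json > /tmp/file_sorted.json
    diff -u /tmp/live_sorted.json /tmp/file_sorted.json && echo 'INFO: JSON config files are the same'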
00:05:52.823 05:19:59 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:52.823 05:19:59 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:52.823 05:19:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.823 + '[' 2 -ne 2 ']' 00:05:52.823 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:52.823 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:52.823 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:52.823 +++ basename /dev/fd/62 00:05:52.823 ++ mktemp /tmp/62.XXX 00:05:52.823 + tmp_file_1=/tmp/62.82N 00:05:52.823 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:52.823 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:52.823 + tmp_file_2=/tmp/spdk_tgt_config.json.o7z 00:05:52.823 + ret=0 00:05:52.823 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:53.082 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:53.082 + diff -u /tmp/62.82N /tmp/spdk_tgt_config.json.o7z 00:05:53.082 + echo 'INFO: JSON config files are the same' 00:05:53.082 INFO: JSON config files are the same 00:05:53.082 + rm /tmp/62.82N /tmp/spdk_tgt_config.json.o7z 00:05:53.341 + exit 0 00:05:53.341 05:20:00 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:53.341 05:20:00 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:53.341 INFO: changing configuration and checking if this can be detected... 00:05:53.341 05:20:00 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:53.341 05:20:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:53.341 05:20:00 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:53.341 05:20:00 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:53.341 05:20:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:53.341 + '[' 2 -ne 2 ']' 00:05:53.341 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:53.341 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:53.341 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:53.341 +++ basename /dev/fd/62 00:05:53.599 ++ mktemp /tmp/62.XXX 00:05:53.599 + tmp_file_1=/tmp/62.rHl 00:05:53.599 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:53.599 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:53.599 + tmp_file_2=/tmp/spdk_tgt_config.json.ebr 00:05:53.599 + ret=0 00:05:53.599 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:53.858 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:53.858 + diff -u /tmp/62.rHl /tmp/spdk_tgt_config.json.ebr 00:05:53.858 + ret=1 00:05:53.858 + echo '=== Start of file: /tmp/62.rHl ===' 00:05:53.858 + cat /tmp/62.rHl 00:05:53.858 + echo '=== End of file: /tmp/62.rHl ===' 00:05:53.858 + echo '' 00:05:53.858 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ebr ===' 00:05:53.858 + cat /tmp/spdk_tgt_config.json.ebr 00:05:53.858 + echo '=== End of file: /tmp/spdk_tgt_config.json.ebr ===' 00:05:53.858 + echo '' 00:05:53.858 + rm /tmp/62.rHl /tmp/spdk_tgt_config.json.ebr 00:05:53.858 + exit 1 00:05:53.858 05:20:00 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:53.858 INFO: configuration change detected. 00:05:53.858 05:20:00 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:53.858 05:20:00 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:53.858 05:20:00 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:53.858 05:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.858 05:20:00 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:53.858 05:20:00 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:53.858 05:20:00 json_config -- json_config/json_config.sh@317 -- # [[ -n 3104792 ]] 00:05:53.858 05:20:00 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:53.858 05:20:00 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:53.858 05:20:00 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:53.858 05:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.858 05:20:00 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:53.858 05:20:00 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:53.858 05:20:00 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:53.858 05:20:00 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:53.858 05:20:00 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:53.858 05:20:00 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:53.858 05:20:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.858 05:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.858 05:20:00 json_config -- json_config/json_config.sh@323 -- # killprocess 3104792 00:05:53.858 05:20:00 json_config -- common/autotest_common.sh@946 -- # '[' -z 3104792 ']' 00:05:53.858 05:20:00 json_config -- common/autotest_common.sh@950 -- # kill -0 3104792 00:05:53.858 05:20:00 json_config -- common/autotest_common.sh@951 -- # uname 00:05:53.858 05:20:00 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:53.858 05:20:00 
json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3104792 00:05:53.858 05:20:00 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:53.858 05:20:00 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:53.858 05:20:00 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3104792' 00:05:53.858 killing process with pid 3104792 00:05:53.858 05:20:00 json_config -- common/autotest_common.sh@965 -- # kill 3104792 00:05:53.858 05:20:00 json_config -- common/autotest_common.sh@970 -- # wait 3104792 00:05:55.756 05:20:02 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.756 05:20:02 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:55.756 05:20:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.756 05:20:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.756 05:20:02 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:55.756 05:20:02 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:55.756 INFO: Success 00:05:55.756 00:05:55.756 real 0m16.691s 00:05:55.756 user 0m18.551s 00:05:55.756 sys 0m2.072s 00:05:55.756 05:20:02 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.756 05:20:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.756 ************************************ 00:05:55.756 END TEST json_config 00:05:55.756 ************************************ 00:05:55.756 05:20:02 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:55.756 05:20:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.756 05:20:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.756 05:20:02 -- common/autotest_common.sh@10 -- # set +x 00:05:55.756 ************************************ 00:05:55.756 START TEST json_config_extra_key 00:05:55.756 ************************************ 00:05:55.756 05:20:02 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:55.756 05:20:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.756 05:20:02 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:55.756 05:20:02 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.756 05:20:02 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.756 05:20:02 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.756 05:20:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.756 05:20:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.756 05:20:02 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.756 05:20:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:55.756 05:20:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.756 05:20:02 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:55.756 05:20:02 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:55.756 05:20:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:55.756 05:20:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:55.756 05:20:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:55.756 05:20:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:55.756 05:20:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:55.756 05:20:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:55.756 05:20:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:55.756 05:20:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:55.756 05:20:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:55.756 05:20:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:55.756 05:20:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:55.756 INFO: launching applications... 00:05:55.756 05:20:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:55.756 05:20:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:55.756 05:20:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:55.756 05:20:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:55.756 05:20:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:55.756 05:20:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:55.756 05:20:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.756 05:20:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.756 05:20:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3105808 00:05:55.756 05:20:02 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:55.756 05:20:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:55.756 Waiting for target to run... 
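Unlike the json_config run, this variant boots the target directly from a JSON file (test/json_config/extra_key.json) instead of --wait-for-rpc plus RPCs. That file's contents are not echoed in the log; the comment below only illustrates the generic SPDK JSON-config shape (a "subsystems" array whose entries list method/params pairs) with a hypothetical malloc bdev named MallocExample:

    # /tmp/example_config.json -- hypothetical contents, generic SPDK config shape:
    #   { "subsystems": [
    #       { "subsystem": "bdev",
    #         "config": [ { "method": "bdev_malloc_create",
    #                       "params": { "name": "MallocExample",
    #                                   "num_blocks": 20480, "block_size": 512 } } ] } ] }
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
        -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/example_config.json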
00:05:55.756 05:20:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3105808 /var/tmp/spdk_tgt.sock 00:05:55.756 05:20:02 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3105808 ']' 00:05:55.756 05:20:02 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:55.756 05:20:02 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:55.756 05:20:02 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:55.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:55.756 05:20:02 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:55.756 05:20:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:55.756 [2024-07-14 05:20:02.714715] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:55.756 [2024-07-14 05:20:02.714809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3105808 ] 00:05:55.756 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.013 [2024-07-14 05:20:03.078470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.272 [2024-07-14 05:20:03.143448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.531 05:20:03 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:56.531 05:20:03 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:56.531 05:20:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:56.531 00:05:56.531 05:20:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:56.531 INFO: shutting down applications... 
00:05:56.531 05:20:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:56.531 05:20:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:56.531 05:20:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:56.531 05:20:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3105808 ]] 00:05:56.531 05:20:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3105808 00:05:56.531 05:20:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:56.789 05:20:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:56.789 05:20:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3105808 00:05:56.789 05:20:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:57.047 05:20:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:57.047 05:20:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.047 05:20:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3105808 00:05:57.047 05:20:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:57.047 05:20:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:57.047 05:20:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:57.047 05:20:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:57.047 SPDK target shutdown done 00:05:57.047 05:20:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:57.047 Success 00:05:57.047 00:05:57.047 real 0m1.536s 00:05:57.047 user 0m1.481s 00:05:57.047 sys 0m0.450s 00:05:57.047 05:20:04 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.047 05:20:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.047 ************************************ 00:05:57.047 END TEST json_config_extra_key 00:05:57.047 ************************************ 00:05:57.304 05:20:04 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:57.304 05:20:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.304 05:20:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.304 05:20:04 -- common/autotest_common.sh@10 -- # set +x 00:05:57.304 ************************************ 00:05:57.304 START TEST alias_rpc 00:05:57.304 ************************************ 00:05:57.304 05:20:04 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:57.304 * Looking for test storage... 
00:05:57.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:57.304 05:20:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:57.304 05:20:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3105992 00:05:57.304 05:20:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.304 05:20:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3105992 00:05:57.304 05:20:04 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3105992 ']' 00:05:57.304 05:20:04 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.304 05:20:04 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:57.304 05:20:04 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.304 05:20:04 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:57.304 05:20:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.304 [2024-07-14 05:20:04.293696] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:57.304 [2024-07-14 05:20:04.293794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3105992 ] 00:05:57.304 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.304 [2024-07-14 05:20:04.355079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.562 [2024-07-14 05:20:04.447501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.820 05:20:04 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:57.820 05:20:04 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:57.820 05:20:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:58.078 05:20:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3105992 00:05:58.078 05:20:04 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3105992 ']' 00:05:58.078 05:20:04 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3105992 00:05:58.078 05:20:04 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:58.078 05:20:04 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:58.078 05:20:04 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3105992 00:05:58.078 05:20:04 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:58.078 05:20:04 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:58.078 05:20:04 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3105992' 00:05:58.078 killing process with pid 3105992 00:05:58.078 05:20:04 alias_rpc -- common/autotest_common.sh@965 -- # kill 3105992 00:05:58.078 05:20:04 alias_rpc -- common/autotest_common.sh@970 -- # wait 3105992 00:05:58.336 00:05:58.336 real 0m1.209s 00:05:58.336 user 0m1.301s 00:05:58.336 sys 0m0.424s 00:05:58.336 05:20:05 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.336 05:20:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.336 
************************************ 00:05:58.336 END TEST alias_rpc 00:05:58.336 ************************************ 00:05:58.336 05:20:05 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:58.336 05:20:05 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:58.336 05:20:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.336 05:20:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.336 05:20:05 -- common/autotest_common.sh@10 -- # set +x 00:05:58.594 ************************************ 00:05:58.594 START TEST spdkcli_tcp 00:05:58.594 ************************************ 00:05:58.594 05:20:05 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:58.594 * Looking for test storage... 00:05:58.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:58.594 05:20:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:58.594 05:20:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:58.594 05:20:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:58.594 05:20:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:58.594 05:20:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:58.594 05:20:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:58.594 05:20:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:58.594 05:20:05 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:58.594 05:20:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.594 05:20:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3106294 00:05:58.594 05:20:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:58.594 05:20:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3106294 00:05:58.595 05:20:05 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3106294 ']' 00:05:58.595 05:20:05 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.595 05:20:05 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:58.595 05:20:05 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.595 05:20:05 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:58.595 05:20:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.595 [2024-07-14 05:20:05.549910] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
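Once this two-core target (-m 0x3 -p 0) is up, tcp.sh exposes its UNIX-domain RPC socket over TCP so rpc.py can exercise its TCP client path. The bridge-and-query pattern, condensed from the trace that follows (socat_pid is just a local name for the background bridge):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods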
00:05:58.595 [2024-07-14 05:20:05.549993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3106294 ] 00:05:58.595 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.595 [2024-07-14 05:20:05.615201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.853 [2024-07-14 05:20:05.709578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.853 [2024-07-14 05:20:05.709583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.111 05:20:05 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:59.111 05:20:05 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:59.111 05:20:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3106312 00:05:59.111 05:20:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:59.111 05:20:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:59.111 [ 00:05:59.111 "bdev_malloc_delete", 00:05:59.111 "bdev_malloc_create", 00:05:59.111 "bdev_null_resize", 00:05:59.111 "bdev_null_delete", 00:05:59.111 "bdev_null_create", 00:05:59.111 "bdev_nvme_cuse_unregister", 00:05:59.111 "bdev_nvme_cuse_register", 00:05:59.111 "bdev_opal_new_user", 00:05:59.111 "bdev_opal_set_lock_state", 00:05:59.111 "bdev_opal_delete", 00:05:59.111 "bdev_opal_get_info", 00:05:59.111 "bdev_opal_create", 00:05:59.111 "bdev_nvme_opal_revert", 00:05:59.111 "bdev_nvme_opal_init", 00:05:59.111 "bdev_nvme_send_cmd", 00:05:59.111 "bdev_nvme_get_path_iostat", 00:05:59.111 "bdev_nvme_get_mdns_discovery_info", 00:05:59.111 "bdev_nvme_stop_mdns_discovery", 00:05:59.111 "bdev_nvme_start_mdns_discovery", 00:05:59.111 "bdev_nvme_set_multipath_policy", 00:05:59.111 "bdev_nvme_set_preferred_path", 00:05:59.111 "bdev_nvme_get_io_paths", 00:05:59.111 "bdev_nvme_remove_error_injection", 00:05:59.111 "bdev_nvme_add_error_injection", 00:05:59.111 "bdev_nvme_get_discovery_info", 00:05:59.111 "bdev_nvme_stop_discovery", 00:05:59.111 "bdev_nvme_start_discovery", 00:05:59.111 "bdev_nvme_get_controller_health_info", 00:05:59.111 "bdev_nvme_disable_controller", 00:05:59.111 "bdev_nvme_enable_controller", 00:05:59.111 "bdev_nvme_reset_controller", 00:05:59.111 "bdev_nvme_get_transport_statistics", 00:05:59.111 "bdev_nvme_apply_firmware", 00:05:59.111 "bdev_nvme_detach_controller", 00:05:59.111 "bdev_nvme_get_controllers", 00:05:59.111 "bdev_nvme_attach_controller", 00:05:59.111 "bdev_nvme_set_hotplug", 00:05:59.111 "bdev_nvme_set_options", 00:05:59.111 "bdev_passthru_delete", 00:05:59.111 "bdev_passthru_create", 00:05:59.111 "bdev_lvol_set_parent_bdev", 00:05:59.111 "bdev_lvol_set_parent", 00:05:59.111 "bdev_lvol_check_shallow_copy", 00:05:59.111 "bdev_lvol_start_shallow_copy", 00:05:59.111 "bdev_lvol_grow_lvstore", 00:05:59.111 "bdev_lvol_get_lvols", 00:05:59.111 "bdev_lvol_get_lvstores", 00:05:59.111 "bdev_lvol_delete", 00:05:59.111 "bdev_lvol_set_read_only", 00:05:59.111 "bdev_lvol_resize", 00:05:59.111 "bdev_lvol_decouple_parent", 00:05:59.111 "bdev_lvol_inflate", 00:05:59.111 "bdev_lvol_rename", 00:05:59.111 "bdev_lvol_clone_bdev", 00:05:59.111 "bdev_lvol_clone", 00:05:59.111 "bdev_lvol_snapshot", 00:05:59.111 "bdev_lvol_create", 00:05:59.111 "bdev_lvol_delete_lvstore", 00:05:59.111 "bdev_lvol_rename_lvstore", 
00:05:59.111 "bdev_lvol_create_lvstore", 00:05:59.111 "bdev_raid_set_options", 00:05:59.111 "bdev_raid_remove_base_bdev", 00:05:59.111 "bdev_raid_add_base_bdev", 00:05:59.111 "bdev_raid_delete", 00:05:59.111 "bdev_raid_create", 00:05:59.111 "bdev_raid_get_bdevs", 00:05:59.111 "bdev_error_inject_error", 00:05:59.111 "bdev_error_delete", 00:05:59.111 "bdev_error_create", 00:05:59.111 "bdev_split_delete", 00:05:59.111 "bdev_split_create", 00:05:59.111 "bdev_delay_delete", 00:05:59.111 "bdev_delay_create", 00:05:59.111 "bdev_delay_update_latency", 00:05:59.111 "bdev_zone_block_delete", 00:05:59.111 "bdev_zone_block_create", 00:05:59.111 "blobfs_create", 00:05:59.111 "blobfs_detect", 00:05:59.111 "blobfs_set_cache_size", 00:05:59.111 "bdev_aio_delete", 00:05:59.111 "bdev_aio_rescan", 00:05:59.111 "bdev_aio_create", 00:05:59.111 "bdev_ftl_set_property", 00:05:59.111 "bdev_ftl_get_properties", 00:05:59.111 "bdev_ftl_get_stats", 00:05:59.111 "bdev_ftl_unmap", 00:05:59.111 "bdev_ftl_unload", 00:05:59.111 "bdev_ftl_delete", 00:05:59.111 "bdev_ftl_load", 00:05:59.111 "bdev_ftl_create", 00:05:59.111 "bdev_virtio_attach_controller", 00:05:59.111 "bdev_virtio_scsi_get_devices", 00:05:59.111 "bdev_virtio_detach_controller", 00:05:59.111 "bdev_virtio_blk_set_hotplug", 00:05:59.111 "bdev_iscsi_delete", 00:05:59.111 "bdev_iscsi_create", 00:05:59.111 "bdev_iscsi_set_options", 00:05:59.111 "accel_error_inject_error", 00:05:59.111 "ioat_scan_accel_module", 00:05:59.111 "dsa_scan_accel_module", 00:05:59.111 "iaa_scan_accel_module", 00:05:59.111 "vfu_virtio_create_scsi_endpoint", 00:05:59.111 "vfu_virtio_scsi_remove_target", 00:05:59.111 "vfu_virtio_scsi_add_target", 00:05:59.111 "vfu_virtio_create_blk_endpoint", 00:05:59.112 "vfu_virtio_delete_endpoint", 00:05:59.112 "keyring_file_remove_key", 00:05:59.112 "keyring_file_add_key", 00:05:59.112 "keyring_linux_set_options", 00:05:59.112 "iscsi_get_histogram", 00:05:59.112 "iscsi_enable_histogram", 00:05:59.112 "iscsi_set_options", 00:05:59.112 "iscsi_get_auth_groups", 00:05:59.112 "iscsi_auth_group_remove_secret", 00:05:59.112 "iscsi_auth_group_add_secret", 00:05:59.112 "iscsi_delete_auth_group", 00:05:59.112 "iscsi_create_auth_group", 00:05:59.112 "iscsi_set_discovery_auth", 00:05:59.112 "iscsi_get_options", 00:05:59.112 "iscsi_target_node_request_logout", 00:05:59.112 "iscsi_target_node_set_redirect", 00:05:59.112 "iscsi_target_node_set_auth", 00:05:59.112 "iscsi_target_node_add_lun", 00:05:59.112 "iscsi_get_stats", 00:05:59.112 "iscsi_get_connections", 00:05:59.112 "iscsi_portal_group_set_auth", 00:05:59.112 "iscsi_start_portal_group", 00:05:59.112 "iscsi_delete_portal_group", 00:05:59.112 "iscsi_create_portal_group", 00:05:59.112 "iscsi_get_portal_groups", 00:05:59.112 "iscsi_delete_target_node", 00:05:59.112 "iscsi_target_node_remove_pg_ig_maps", 00:05:59.112 "iscsi_target_node_add_pg_ig_maps", 00:05:59.112 "iscsi_create_target_node", 00:05:59.112 "iscsi_get_target_nodes", 00:05:59.112 "iscsi_delete_initiator_group", 00:05:59.112 "iscsi_initiator_group_remove_initiators", 00:05:59.112 "iscsi_initiator_group_add_initiators", 00:05:59.112 "iscsi_create_initiator_group", 00:05:59.112 "iscsi_get_initiator_groups", 00:05:59.112 "nvmf_set_crdt", 00:05:59.112 "nvmf_set_config", 00:05:59.112 "nvmf_set_max_subsystems", 00:05:59.112 "nvmf_stop_mdns_prr", 00:05:59.112 "nvmf_publish_mdns_prr", 00:05:59.112 "nvmf_subsystem_get_listeners", 00:05:59.112 "nvmf_subsystem_get_qpairs", 00:05:59.112 "nvmf_subsystem_get_controllers", 00:05:59.112 "nvmf_get_stats", 00:05:59.112 
"nvmf_get_transports", 00:05:59.112 "nvmf_create_transport", 00:05:59.112 "nvmf_get_targets", 00:05:59.112 "nvmf_delete_target", 00:05:59.112 "nvmf_create_target", 00:05:59.112 "nvmf_subsystem_allow_any_host", 00:05:59.112 "nvmf_subsystem_remove_host", 00:05:59.112 "nvmf_subsystem_add_host", 00:05:59.112 "nvmf_ns_remove_host", 00:05:59.112 "nvmf_ns_add_host", 00:05:59.112 "nvmf_subsystem_remove_ns", 00:05:59.112 "nvmf_subsystem_add_ns", 00:05:59.112 "nvmf_subsystem_listener_set_ana_state", 00:05:59.112 "nvmf_discovery_get_referrals", 00:05:59.112 "nvmf_discovery_remove_referral", 00:05:59.112 "nvmf_discovery_add_referral", 00:05:59.112 "nvmf_subsystem_remove_listener", 00:05:59.112 "nvmf_subsystem_add_listener", 00:05:59.112 "nvmf_delete_subsystem", 00:05:59.112 "nvmf_create_subsystem", 00:05:59.112 "nvmf_get_subsystems", 00:05:59.112 "env_dpdk_get_mem_stats", 00:05:59.112 "nbd_get_disks", 00:05:59.112 "nbd_stop_disk", 00:05:59.112 "nbd_start_disk", 00:05:59.112 "ublk_recover_disk", 00:05:59.112 "ublk_get_disks", 00:05:59.112 "ublk_stop_disk", 00:05:59.112 "ublk_start_disk", 00:05:59.112 "ublk_destroy_target", 00:05:59.112 "ublk_create_target", 00:05:59.112 "virtio_blk_create_transport", 00:05:59.112 "virtio_blk_get_transports", 00:05:59.112 "vhost_controller_set_coalescing", 00:05:59.112 "vhost_get_controllers", 00:05:59.112 "vhost_delete_controller", 00:05:59.112 "vhost_create_blk_controller", 00:05:59.112 "vhost_scsi_controller_remove_target", 00:05:59.112 "vhost_scsi_controller_add_target", 00:05:59.112 "vhost_start_scsi_controller", 00:05:59.112 "vhost_create_scsi_controller", 00:05:59.112 "thread_set_cpumask", 00:05:59.112 "framework_get_scheduler", 00:05:59.112 "framework_set_scheduler", 00:05:59.112 "framework_get_reactors", 00:05:59.112 "thread_get_io_channels", 00:05:59.112 "thread_get_pollers", 00:05:59.112 "thread_get_stats", 00:05:59.112 "framework_monitor_context_switch", 00:05:59.112 "spdk_kill_instance", 00:05:59.112 "log_enable_timestamps", 00:05:59.112 "log_get_flags", 00:05:59.112 "log_clear_flag", 00:05:59.112 "log_set_flag", 00:05:59.112 "log_get_level", 00:05:59.112 "log_set_level", 00:05:59.112 "log_get_print_level", 00:05:59.112 "log_set_print_level", 00:05:59.112 "framework_enable_cpumask_locks", 00:05:59.112 "framework_disable_cpumask_locks", 00:05:59.112 "framework_wait_init", 00:05:59.112 "framework_start_init", 00:05:59.112 "scsi_get_devices", 00:05:59.112 "bdev_get_histogram", 00:05:59.112 "bdev_enable_histogram", 00:05:59.112 "bdev_set_qos_limit", 00:05:59.112 "bdev_set_qd_sampling_period", 00:05:59.112 "bdev_get_bdevs", 00:05:59.112 "bdev_reset_iostat", 00:05:59.112 "bdev_get_iostat", 00:05:59.112 "bdev_examine", 00:05:59.112 "bdev_wait_for_examine", 00:05:59.112 "bdev_set_options", 00:05:59.112 "notify_get_notifications", 00:05:59.112 "notify_get_types", 00:05:59.112 "accel_get_stats", 00:05:59.112 "accel_set_options", 00:05:59.112 "accel_set_driver", 00:05:59.112 "accel_crypto_key_destroy", 00:05:59.112 "accel_crypto_keys_get", 00:05:59.112 "accel_crypto_key_create", 00:05:59.112 "accel_assign_opc", 00:05:59.112 "accel_get_module_info", 00:05:59.112 "accel_get_opc_assignments", 00:05:59.112 "vmd_rescan", 00:05:59.112 "vmd_remove_device", 00:05:59.112 "vmd_enable", 00:05:59.112 "sock_get_default_impl", 00:05:59.112 "sock_set_default_impl", 00:05:59.112 "sock_impl_set_options", 00:05:59.112 "sock_impl_get_options", 00:05:59.112 "iobuf_get_stats", 00:05:59.112 "iobuf_set_options", 00:05:59.112 "keyring_get_keys", 00:05:59.112 "framework_get_pci_devices", 
00:05:59.112 "framework_get_config", 00:05:59.112 "framework_get_subsystems", 00:05:59.112 "vfu_tgt_set_base_path", 00:05:59.112 "trace_get_info", 00:05:59.112 "trace_get_tpoint_group_mask", 00:05:59.112 "trace_disable_tpoint_group", 00:05:59.112 "trace_enable_tpoint_group", 00:05:59.112 "trace_clear_tpoint_mask", 00:05:59.112 "trace_set_tpoint_mask", 00:05:59.112 "spdk_get_version", 00:05:59.112 "rpc_get_methods" 00:05:59.112 ] 00:05:59.370 05:20:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:59.370 05:20:06 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:59.370 05:20:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.370 05:20:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:59.370 05:20:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3106294 00:05:59.370 05:20:06 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3106294 ']' 00:05:59.370 05:20:06 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3106294 00:05:59.370 05:20:06 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:59.370 05:20:06 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:59.370 05:20:06 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3106294 00:05:59.370 05:20:06 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:59.370 05:20:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:59.370 05:20:06 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3106294' 00:05:59.370 killing process with pid 3106294 00:05:59.370 05:20:06 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3106294 00:05:59.370 05:20:06 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3106294 00:05:59.628 00:05:59.628 real 0m1.234s 00:05:59.628 user 0m2.189s 00:05:59.628 sys 0m0.468s 00:05:59.628 05:20:06 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.628 05:20:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.628 ************************************ 00:05:59.628 END TEST spdkcli_tcp 00:05:59.628 ************************************ 00:05:59.628 05:20:06 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:59.628 05:20:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:59.628 05:20:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.628 05:20:06 -- common/autotest_common.sh@10 -- # set +x 00:05:59.628 ************************************ 00:05:59.628 START TEST dpdk_mem_utility 00:05:59.628 ************************************ 00:05:59.628 05:20:06 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:59.886 * Looking for test storage... 
00:05:59.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:59.886 05:20:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:59.886 05:20:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3106508 00:05:59.886 05:20:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.886 05:20:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3106508 00:05:59.886 05:20:06 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3106508 ']' 00:05:59.886 05:20:06 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.886 05:20:06 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:59.886 05:20:06 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.886 05:20:06 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:59.886 05:20:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:59.886 [2024-07-14 05:20:06.817129] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:59.886 [2024-07-14 05:20:06.817221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3106508 ] 00:05:59.886 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.886 [2024-07-14 05:20:06.874907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.886 [2024-07-14 05:20:06.964896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.144 05:20:07 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:00.144 05:20:07 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:00.144 05:20:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:00.144 05:20:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:00.144 05:20:07 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.144 05:20:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:00.144 { 00:06:00.144 "filename": "/tmp/spdk_mem_dump.txt" 00:06:00.144 } 00:06:00.144 05:20:07 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.144 05:20:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:00.402 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:00.402 1 heaps totaling size 814.000000 MiB 00:06:00.402 size: 814.000000 MiB heap id: 0 00:06:00.402 end heaps---------- 00:06:00.402 8 mempools totaling size 598.116089 MiB 00:06:00.402 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:00.402 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:00.402 size: 84.521057 MiB name: bdev_io_3106508 00:06:00.402 size: 51.011292 MiB name: evtpool_3106508 00:06:00.402 size: 50.003479 MiB name: 
msgpool_3106508 00:06:00.402 size: 21.763794 MiB name: PDU_Pool 00:06:00.402 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:00.402 size: 0.026123 MiB name: Session_Pool 00:06:00.402 end mempools------- 00:06:00.402 6 memzones totaling size 4.142822 MiB 00:06:00.402 size: 1.000366 MiB name: RG_ring_0_3106508 00:06:00.402 size: 1.000366 MiB name: RG_ring_1_3106508 00:06:00.402 size: 1.000366 MiB name: RG_ring_4_3106508 00:06:00.402 size: 1.000366 MiB name: RG_ring_5_3106508 00:06:00.402 size: 0.125366 MiB name: RG_ring_2_3106508 00:06:00.402 size: 0.015991 MiB name: RG_ring_3_3106508 00:06:00.402 end memzones------- 00:06:00.402 05:20:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:00.402 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:00.402 list of free elements. size: 12.519348 MiB 00:06:00.402 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:00.402 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:00.402 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:00.402 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:00.402 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:00.402 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:00.402 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:00.402 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:00.402 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:00.402 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:00.402 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:00.402 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:00.402 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:00.402 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:00.402 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:00.402 list of standard malloc elements. 
size: 199.218079 MiB 00:06:00.402 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:00.402 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:00.402 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:00.402 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:00.402 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:00.402 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:00.402 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:00.402 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:00.402 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:00.402 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:00.402 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:00.402 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:00.402 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:00.402 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:00.402 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:00.402 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:00.402 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:00.402 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:00.402 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:00.402 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:00.402 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:00.402 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:00.402 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:00.402 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:00.402 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:00.402 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:00.402 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:00.402 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:00.402 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:00.402 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:00.402 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:00.402 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:00.402 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:00.402 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:00.402 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:00.402 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:00.402 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:00.402 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:00.402 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:00.402 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:00.402 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:00.402 list of memzone associated elements. 
size: 602.262573 MiB 00:06:00.402 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:00.402 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:00.402 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:00.402 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:00.402 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:00.402 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3106508_0 00:06:00.402 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:00.402 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3106508_0 00:06:00.402 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:00.402 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3106508_0 00:06:00.402 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:00.402 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:00.402 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:00.402 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:00.402 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:00.402 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3106508 00:06:00.402 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:00.402 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3106508 00:06:00.402 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:00.402 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3106508 00:06:00.402 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:00.402 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:00.402 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:00.402 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:00.402 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:00.402 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:00.402 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:00.402 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:00.402 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:00.402 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3106508 00:06:00.402 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:00.402 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3106508 00:06:00.402 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:00.402 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3106508 00:06:00.402 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:00.402 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3106508 00:06:00.402 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:00.402 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3106508 00:06:00.402 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:00.402 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:00.402 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:00.402 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:00.402 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:00.402 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:00.402 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:00.402 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3106508 00:06:00.402 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:00.402 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:00.402 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:00.402 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:00.402 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:00.402 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3106508 00:06:00.402 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:00.402 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:00.402 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:00.402 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3106508 00:06:00.402 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:00.402 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3106508 00:06:00.402 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:00.402 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:00.402 05:20:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:00.402 05:20:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3106508 00:06:00.402 05:20:07 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3106508 ']' 00:06:00.402 05:20:07 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3106508 00:06:00.402 05:20:07 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:00.402 05:20:07 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:00.402 05:20:07 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3106508 00:06:00.402 05:20:07 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:00.402 05:20:07 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:00.402 05:20:07 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3106508' 00:06:00.402 killing process with pid 3106508 00:06:00.402 05:20:07 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3106508 00:06:00.402 05:20:07 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3106508 00:06:00.967 00:06:00.967 real 0m1.063s 00:06:00.967 user 0m1.017s 00:06:00.967 sys 0m0.427s 00:06:00.967 05:20:07 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.967 05:20:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:00.967 ************************************ 00:06:00.967 END TEST dpdk_mem_utility 00:06:00.967 ************************************ 00:06:00.967 05:20:07 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:00.967 05:20:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.967 05:20:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.967 05:20:07 -- common/autotest_common.sh@10 -- # set +x 00:06:00.967 ************************************ 00:06:00.967 START TEST event 00:06:00.967 ************************************ 00:06:00.967 05:20:07 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:00.967 * Looking for test storage... 
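The dpdk_mem_utility run above reduces to a short RPC round-trip: start spdk_tgt, ask it to write its DPDK memory statistics to /tmp/spdk_mem_dump.txt via the env_dpdk_get_mem_stats RPC, then post-process that dump with scripts/dpdk_mem_info.py (no arguments for the heap/mempool/memzone summary shown above, -m 0 for the per-element view of heap 0). A minimal manual sketch, assuming an already-built SPDK tree with hugepages configured; the rpc_get_methods polling loop is only a crude stand-in for the test's waitforlisten helper:

  ./build/bin/spdk_tgt &                                            # target listens on /var/tmp/spdk.sock
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  ./scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats     # writes /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                                        # heap / mempool / memzone summary
  ./scripts/dpdk_mem_info.py -m 0                                   # element-level detail for heap 0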
00:06:00.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:00.967 05:20:07 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:00.967 05:20:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:00.967 05:20:07 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.967 05:20:07 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:00.967 05:20:07 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.967 05:20:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.967 ************************************ 00:06:00.967 START TEST event_perf 00:06:00.967 ************************************ 00:06:00.967 05:20:07 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.967 Running I/O for 1 seconds...[2024-07-14 05:20:07.901701] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:00.967 [2024-07-14 05:20:07.901766] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3106697 ] 00:06:00.967 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.967 [2024-07-14 05:20:07.965549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.967 [2024-07-14 05:20:08.056352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.967 [2024-07-14 05:20:08.056438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.967 [2024-07-14 05:20:08.056441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.967 [2024-07-14 05:20:08.056381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.337 Running I/O for 1 seconds... 00:06:02.337 lcore 0: 235910 00:06:02.337 lcore 1: 235909 00:06:02.337 lcore 2: 235909 00:06:02.337 lcore 3: 235910 00:06:02.337 done. 00:06:02.337 00:06:02.337 real 0m1.252s 00:06:02.337 user 0m4.150s 00:06:02.337 sys 0m0.094s 00:06:02.337 05:20:09 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.337 05:20:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.337 ************************************ 00:06:02.337 END TEST event_perf 00:06:02.337 ************************************ 00:06:02.337 05:20:09 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:02.337 05:20:09 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:02.337 05:20:09 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.337 05:20:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.337 ************************************ 00:06:02.337 START TEST event_reactor 00:06:02.337 ************************************ 00:06:02.337 05:20:09 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:02.337 [2024-07-14 05:20:09.203488] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:02.337 [2024-07-14 05:20:09.203554] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3106853 ] 00:06:02.337 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.337 [2024-07-14 05:20:09.267974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.337 [2024-07-14 05:20:09.357455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.710 test_start 00:06:03.710 oneshot 00:06:03.710 tick 100 00:06:03.710 tick 100 00:06:03.710 tick 250 00:06:03.710 tick 100 00:06:03.710 tick 100 00:06:03.710 tick 100 00:06:03.710 tick 250 00:06:03.710 tick 500 00:06:03.710 tick 100 00:06:03.710 tick 100 00:06:03.710 tick 250 00:06:03.710 tick 100 00:06:03.710 tick 100 00:06:03.710 test_end 00:06:03.710 00:06:03.710 real 0m1.247s 00:06:03.710 user 0m1.162s 00:06:03.710 sys 0m0.080s 00:06:03.710 05:20:10 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.710 05:20:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:03.710 ************************************ 00:06:03.710 END TEST event_reactor 00:06:03.710 ************************************ 00:06:03.710 05:20:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:03.710 05:20:10 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:03.710 05:20:10 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.710 05:20:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.710 ************************************ 00:06:03.710 START TEST event_reactor_perf 00:06:03.710 ************************************ 00:06:03.710 05:20:10 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:03.710 [2024-07-14 05:20:10.500348] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:03.710 [2024-07-14 05:20:10.500416] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3107013 ] 00:06:03.710 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.710 [2024-07-14 05:20:10.564982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.710 [2024-07-14 05:20:10.655100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.650 test_start 00:06:04.650 test_end 00:06:04.650 Performance: 351159 events per second 00:06:04.650 00:06:04.650 real 0m1.250s 00:06:04.650 user 0m1.165s 00:06:04.650 sys 0m0.081s 00:06:04.650 05:20:11 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.650 05:20:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.650 ************************************ 00:06:04.650 END TEST event_reactor_perf 00:06:04.650 ************************************ 00:06:04.908 05:20:11 event -- event/event.sh@49 -- # uname -s 00:06:04.908 05:20:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:04.908 05:20:11 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:04.908 05:20:11 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.908 05:20:11 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.908 05:20:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.908 ************************************ 00:06:04.908 START TEST event_scheduler 00:06:04.908 ************************************ 00:06:04.908 05:20:11 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:04.908 * Looking for test storage... 00:06:04.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:04.908 05:20:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:04.908 05:20:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3107191 00:06:04.908 05:20:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:04.908 05:20:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.908 05:20:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3107191 00:06:04.908 05:20:11 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3107191 ']' 00:06:04.908 05:20:11 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.908 05:20:11 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.908 05:20:11 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
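The three event micro-tests above (event_perf, reactor, reactor_perf) are plain SPDK applications: -m gives the core mask and -t the run time in seconds, and each prints its own summary (per-lcore event counts, the scheduled tick trace, and an events-per-second figure, respectively). A sketch of invoking them directly, assuming the test binaries were built in place as in this run and that hugepages are available (root may be required):

  ./test/event/event_perf/event_perf -m 0xF -t 1        # prints one "lcore N: <events>" line per core
  ./test/event/reactor/reactor -t 1                     # prints test_start, the tick sequence, test_end
  ./test/event/reactor_perf/reactor_perf -t 1           # prints "Performance: <N> events per second"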
00:06:04.909 05:20:11 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.909 05:20:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.909 [2024-07-14 05:20:11.888828] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:04.909 [2024-07-14 05:20:11.888923] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3107191 ] 00:06:04.909 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.909 [2024-07-14 05:20:11.947192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:05.167 [2024-07-14 05:20:12.035614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.167 [2024-07-14 05:20:12.035679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.167 [2024-07-14 05:20:12.035744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.167 [2024-07-14 05:20:12.035746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.167 05:20:12 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.167 05:20:12 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:05.167 05:20:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:05.167 05:20:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.167 05:20:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.167 POWER: Env isn't set yet! 00:06:05.167 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:05.167 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:05.167 POWER: Cannot get available frequencies of lcore 0 00:06:05.167 POWER: Attempting to initialise PSTAT power management... 
00:06:05.167 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:05.167 POWER: Initialized successfully for lcore 0 power management 00:06:05.167 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:05.167 POWER: Initialized successfully for lcore 1 power management 00:06:05.167 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:05.167 POWER: Initialized successfully for lcore 2 power management 00:06:05.167 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:05.167 POWER: Initialized successfully for lcore 3 power management 00:06:05.167 [2024-07-14 05:20:12.138083] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:05.167 [2024-07-14 05:20:12.138101] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:05.167 [2024-07-14 05:20:12.138112] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:05.167 05:20:12 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.167 05:20:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:05.167 05:20:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.167 05:20:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.167 [2024-07-14 05:20:12.240653] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:05.167 05:20:12 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.167 05:20:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:05.167 05:20:12 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.167 05:20:12 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.167 05:20:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.167 ************************************ 00:06:05.167 START TEST scheduler_create_thread 00:06:05.167 ************************************ 00:06:05.167 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:05.167 05:20:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:05.167 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.167 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.426 2 00:06:05.426 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.426 05:20:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:05.426 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.426 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.426 3 00:06:05.426 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.426 05:20:12 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:05.426 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.426 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.426 4 00:06:05.426 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.426 05:20:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:05.426 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.427 5 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.427 6 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.427 7 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.427 8 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.427 9 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.427 10 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.427 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.995 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.995 05:20:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:05.995 05:20:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:05.995 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.995 05:20:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.991 05:20:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.991 00:06:06.991 real 0m1.758s 00:06:06.991 user 0m0.009s 00:06:06.991 sys 0m0.005s 00:06:06.991 05:20:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.991 05:20:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.991 ************************************ 00:06:06.991 END TEST scheduler_create_thread 00:06:06.991 ************************************ 00:06:06.991 05:20:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:06.991 05:20:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3107191 00:06:06.991 05:20:14 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3107191 ']' 00:06:06.991 05:20:14 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3107191 00:06:06.991 05:20:14 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
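The scheduler_create_thread test above is driven entirely over RPC: the scheduler test app is started with --wait-for-rpc, the dynamic scheduler is selected and the framework initialised, and threads are then created, re-weighted and deleted through the test's RPC plugin. A condensed sketch of the same sequence; it assumes the scheduler test's plugin module (loaded via --plugin scheduler_plugin, as the rpc_cmd wrapper in this log arranges) is on PYTHONPATH, and the thread IDs 11/12 are simply the ones this particular run returned:

  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &     # 4 cores, main lcore 2
  ./scripts/rpc.py framework_set_scheduler dynamic                       # must be set before init
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # ID returned by create
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12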
00:06:06.991 05:20:14 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:06.991 05:20:14 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3107191 00:06:07.248 05:20:14 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:07.248 05:20:14 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:07.248 05:20:14 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3107191' 00:06:07.248 killing process with pid 3107191 00:06:07.248 05:20:14 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3107191 00:06:07.248 05:20:14 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3107191 00:06:07.506 [2024-07-14 05:20:14.508510] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:07.764 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:07.764 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:07.764 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:07.764 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:07.764 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:07.764 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:07.764 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:07.764 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:07.764 00:06:07.764 real 0m2.921s 00:06:07.764 user 0m3.834s 00:06:07.764 sys 0m0.326s 00:06:07.764 05:20:14 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.764 05:20:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.764 ************************************ 00:06:07.764 END TEST event_scheduler 00:06:07.764 ************************************ 00:06:07.764 05:20:14 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:07.764 05:20:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:07.764 05:20:14 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.764 05:20:14 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.764 05:20:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.764 ************************************ 00:06:07.764 START TEST app_repeat 00:06:07.764 ************************************ 00:06:07.764 05:20:14 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:07.764 05:20:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.764 05:20:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.764 05:20:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:07.764 05:20:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.764 05:20:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:07.764 05:20:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:07.764 05:20:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:07.764 05:20:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3107641 00:06:07.764 05:20:14 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:07.764 05:20:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.764 05:20:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3107641' 00:06:07.764 Process app_repeat pid: 3107641 00:06:07.764 05:20:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:07.764 05:20:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:07.764 spdk_app_start Round 0 00:06:07.764 05:20:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3107641 /var/tmp/spdk-nbd.sock 00:06:07.764 05:20:14 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3107641 ']' 00:06:07.764 05:20:14 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.764 05:20:14 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.764 05:20:14 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.764 05:20:14 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.764 05:20:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.764 [2024-07-14 05:20:14.788072] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:07.764 [2024-07-14 05:20:14.788136] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3107641 ] 00:06:07.764 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.764 [2024-07-14 05:20:14.852595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.022 [2024-07-14 05:20:14.943690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.022 [2024-07-14 05:20:14.943696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.022 05:20:15 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:08.022 05:20:15 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:08.022 05:20:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.280 Malloc0 00:06:08.280 05:20:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.537 Malloc1 00:06:08.537 05:20:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.537 05:20:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.537 05:20:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.537 05:20:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.537 05:20:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.537 05:20:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.537 05:20:15 event.app_repeat 
-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.537 05:20:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.537 05:20:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.537 05:20:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.537 05:20:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.537 05:20:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.537 05:20:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.537 05:20:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.537 05:20:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.537 05:20:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.795 /dev/nbd0 00:06:08.795 05:20:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.795 05:20:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.795 05:20:15 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:08.795 05:20:15 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:08.795 05:20:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:08.795 05:20:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:08.795 05:20:15 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:08.795 05:20:15 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:08.795 05:20:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:08.795 05:20:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:08.795 05:20:15 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.795 1+0 records in 00:06:08.795 1+0 records out 00:06:08.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196461 s, 20.8 MB/s 00:06:08.795 05:20:15 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.795 05:20:15 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:08.795 05:20:15 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.795 05:20:15 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:08.795 05:20:15 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:08.795 05:20:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.795 05:20:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.795 05:20:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.053 /dev/nbd1 00:06:09.053 05:20:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.053 05:20:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.053 05:20:16 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:09.053 05:20:16 event.app_repeat -- 
common/autotest_common.sh@865 -- # local i 00:06:09.053 05:20:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:09.053 05:20:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:09.053 05:20:16 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:09.053 05:20:16 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:09.053 05:20:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:09.053 05:20:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:09.053 05:20:16 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.053 1+0 records in 00:06:09.053 1+0 records out 00:06:09.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203961 s, 20.1 MB/s 00:06:09.053 05:20:16 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.053 05:20:16 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:09.053 05:20:16 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.053 05:20:16 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:09.053 05:20:16 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:09.053 05:20:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.053 05:20:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.053 05:20:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.053 05:20:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.053 05:20:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.311 05:20:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.311 { 00:06:09.311 "nbd_device": "/dev/nbd0", 00:06:09.311 "bdev_name": "Malloc0" 00:06:09.311 }, 00:06:09.311 { 00:06:09.311 "nbd_device": "/dev/nbd1", 00:06:09.311 "bdev_name": "Malloc1" 00:06:09.311 } 00:06:09.311 ]' 00:06:09.311 05:20:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.311 { 00:06:09.311 "nbd_device": "/dev/nbd0", 00:06:09.311 "bdev_name": "Malloc0" 00:06:09.311 }, 00:06:09.311 { 00:06:09.311 "nbd_device": "/dev/nbd1", 00:06:09.311 "bdev_name": "Malloc1" 00:06:09.311 } 00:06:09.311 ]' 00:06:09.311 05:20:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.312 05:20:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.312 /dev/nbd1' 00:06:09.312 05:20:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.312 /dev/nbd1' 00:06:09.312 05:20:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.312 05:20:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.312 05:20:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.312 05:20:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.312 05:20:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.312 05:20:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.312 05:20:16 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.312 05:20:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.312 05:20:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.312 05:20:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.312 05:20:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.312 05:20:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.570 256+0 records in 00:06:09.570 256+0 records out 00:06:09.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00493352 s, 213 MB/s 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.570 256+0 records in 00:06:09.570 256+0 records out 00:06:09.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236963 s, 44.3 MB/s 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.570 256+0 records in 00:06:09.570 256+0 records out 00:06:09.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255694 s, 41.0 MB/s 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.570 05:20:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.829 05:20:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.829 05:20:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.829 05:20:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.829 05:20:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.829 05:20:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.829 05:20:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.829 05:20:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.829 05:20:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.829 05:20:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.829 05:20:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.087 05:20:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.087 05:20:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.087 05:20:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.087 05:20:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.087 05:20:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.087 05:20:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.087 05:20:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.087 05:20:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.087 05:20:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.087 05:20:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.087 05:20:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.345 05:20:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.345 05:20:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.345 05:20:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.345 05:20:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.345 05:20:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.345 05:20:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.345 05:20:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.345 05:20:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.345 05:20:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.345 05:20:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.345 05:20:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.345 05:20:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.345 05:20:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.604 05:20:17 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:06:10.862 [2024-07-14 05:20:17.818376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.862 [2024-07-14 05:20:17.909244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.862 [2024-07-14 05:20:17.909244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.862 [2024-07-14 05:20:17.966019] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.862 [2024-07-14 05:20:17.966106] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.144 05:20:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.144 05:20:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:14.144 spdk_app_start Round 1 00:06:14.144 05:20:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3107641 /var/tmp/spdk-nbd.sock 00:06:14.144 05:20:20 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3107641 ']' 00:06:14.144 05:20:20 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.144 05:20:20 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:14.144 05:20:20 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.144 05:20:20 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:14.144 05:20:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.144 05:20:20 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:14.144 05:20:20 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:14.144 05:20:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.144 Malloc0 00:06:14.144 05:20:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.402 Malloc1 00:06:14.402 05:20:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.402 05:20:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.402 05:20:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.402 05:20:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.402 05:20:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.402 05:20:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.402 05:20:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.402 05:20:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.402 05:20:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.402 05:20:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.402 05:20:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.402 05:20:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
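Each app_repeat round follows the same nbd round-trip seen above: two 64 MiB, 4 KiB-block malloc bdevs are created over the dedicated RPC socket, exposed as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is pushed through each and compared back, then the nbd disks are stopped and the instance is killed before the next round starts. A condensed sketch of one round using the same RPC calls as the log; the scratch-file path below is arbitrary (the test keeps its file under spdk/test/event/), and the nbd kernel module must already be loaded:

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc1
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256                     # 1 MiB of test data
  for d in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of=$d bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest $d                                         # read back and compare
  done
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks                     # should report an empty list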
00:06:14.402 05:20:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:14.402 05:20:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.402 05:20:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.402 05:20:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.660 /dev/nbd0 00:06:14.661 05:20:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.661 05:20:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.661 05:20:21 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:14.661 05:20:21 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:14.661 05:20:21 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:14.661 05:20:21 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:14.661 05:20:21 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:14.661 05:20:21 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:14.661 05:20:21 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:14.661 05:20:21 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:14.661 05:20:21 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.661 1+0 records in 00:06:14.661 1+0 records out 00:06:14.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234735 s, 17.4 MB/s 00:06:14.661 05:20:21 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.661 05:20:21 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:14.661 05:20:21 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.661 05:20:21 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:14.661 05:20:21 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:14.661 05:20:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.661 05:20:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.661 05:20:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.918 /dev/nbd1 00:06:14.918 05:20:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.918 05:20:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.918 05:20:21 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:14.918 05:20:21 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:14.918 05:20:21 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:14.918 05:20:21 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:14.918 05:20:21 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:14.918 05:20:21 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:14.918 05:20:21 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:14.918 05:20:21 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 
00:06:14.918 05:20:21 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.918 1+0 records in 00:06:14.918 1+0 records out 00:06:14.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206503 s, 19.8 MB/s 00:06:14.918 05:20:21 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.918 05:20:21 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:14.918 05:20:21 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.918 05:20:21 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:14.918 05:20:21 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:14.918 05:20:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.918 05:20:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.918 05:20:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.918 05:20:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.918 05:20:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:15.176 { 00:06:15.176 "nbd_device": "/dev/nbd0", 00:06:15.176 "bdev_name": "Malloc0" 00:06:15.176 }, 00:06:15.176 { 00:06:15.176 "nbd_device": "/dev/nbd1", 00:06:15.176 "bdev_name": "Malloc1" 00:06:15.176 } 00:06:15.176 ]' 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.176 { 00:06:15.176 "nbd_device": "/dev/nbd0", 00:06:15.176 "bdev_name": "Malloc0" 00:06:15.176 }, 00:06:15.176 { 00:06:15.176 "nbd_device": "/dev/nbd1", 00:06:15.176 "bdev_name": "Malloc1" 00:06:15.176 } 00:06:15.176 ]' 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:15.176 /dev/nbd1' 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.176 /dev/nbd1' 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.176 256+0 records in 00:06:15.176 256+0 records out 00:06:15.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00493453 s, 212 MB/s 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.176 256+0 records in 00:06:15.176 256+0 records out 00:06:15.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236417 s, 44.4 MB/s 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.176 256+0 records in 00:06:15.176 256+0 records out 00:06:15.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231874 s, 45.2 MB/s 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:15.176 05:20:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.177 05:20:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.435 05:20:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.693 05:20:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.693 05:20:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.693 
05:20:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.693 05:20:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.693 05:20:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.693 05:20:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.693 05:20:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.693 05:20:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.693 05:20:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.951 05:20:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.951 05:20:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.951 05:20:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.951 05:20:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.951 05:20:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.951 05:20:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.951 05:20:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.951 05:20:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.951 05:20:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.951 05:20:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.951 05:20:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.951 05:20:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.209 05:20:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.209 05:20:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.209 05:20:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.209 05:20:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.209 05:20:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.209 05:20:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.209 05:20:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.209 05:20:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.209 05:20:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.209 05:20:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.209 05:20:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.209 05:20:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.466 05:20:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:16.724 [2024-07-14 05:20:23.584246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.724 [2024-07-14 05:20:23.675008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.724 [2024-07-14 05:20:23.675014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.724 [2024-07-14 05:20:23.738129] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:06:16.724 [2024-07-14 05:20:23.738221] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.005 05:20:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.005 05:20:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:20.005 spdk_app_start Round 2 00:06:20.005 05:20:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3107641 /var/tmp/spdk-nbd.sock 00:06:20.005 05:20:26 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3107641 ']' 00:06:20.005 05:20:26 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.005 05:20:26 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.005 05:20:26 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.005 05:20:26 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.005 05:20:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.005 05:20:26 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.005 05:20:26 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:20.005 05:20:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.005 Malloc0 00:06:20.005 05:20:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.263 Malloc1 00:06:20.263 05:20:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.263 05:20:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.263 05:20:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.263 05:20:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.263 05:20:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.263 05:20:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.263 05:20:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.263 05:20:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.263 05:20:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.263 05:20:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.263 05:20:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.263 05:20:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.263 05:20:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.263 05:20:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.263 05:20:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.263 05:20:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:20.521 /dev/nbd0 00:06:20.521 
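An illustrative sketch of the RPC sequence the app_repeat round above is exercising, assuming the SPDK app from this run is still listening on /var/tmp/spdk-nbd.sock; only calls that appear verbatim in the trace are used.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

# Create two 64 MB malloc bdevs with a 4096-byte block size; each call prints
# the new bdev name (Malloc0 and Malloc1 in this run).
$RPC bdev_malloc_create 64 4096
$RPC bdev_malloc_create 64 4096

# Export the bdevs as NBD devices, list what is attached, then tear them down.
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1
$RPC nbd_get_disks | jq -r '.[] | .nbd_device'
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1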
05:20:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.521 05:20:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.521 05:20:27 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:20.521 05:20:27 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:20.521 05:20:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:20.521 05:20:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:20.521 05:20:27 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:20.521 05:20:27 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:20.521 05:20:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:20.521 05:20:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:20.521 05:20:27 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.521 1+0 records in 00:06:20.521 1+0 records out 00:06:20.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016678 s, 24.6 MB/s 00:06:20.521 05:20:27 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.521 05:20:27 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:20.521 05:20:27 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.521 05:20:27 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:20.521 05:20:27 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:20.521 05:20:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.521 05:20:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.521 05:20:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:20.779 /dev/nbd1 00:06:20.779 05:20:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.779 05:20:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.779 05:20:27 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:20.779 05:20:27 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:20.779 05:20:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:20.779 05:20:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:20.779 05:20:27 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:20.779 05:20:27 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:20.779 05:20:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:20.779 05:20:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:20.779 05:20:27 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.779 1+0 records in 00:06:20.779 1+0 records out 00:06:20.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186646 s, 21.9 MB/s 00:06:20.779 05:20:27 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.779 05:20:27 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:20.779 05:20:27 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.779 05:20:27 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:20.779 05:20:27 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:20.779 05:20:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.779 05:20:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.779 05:20:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.779 05:20:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.779 05:20:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.037 05:20:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.037 { 00:06:21.037 "nbd_device": "/dev/nbd0", 00:06:21.037 "bdev_name": "Malloc0" 00:06:21.037 }, 00:06:21.037 { 00:06:21.037 "nbd_device": "/dev/nbd1", 00:06:21.037 "bdev_name": "Malloc1" 00:06:21.037 } 00:06:21.037 ]' 00:06:21.037 05:20:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.037 { 00:06:21.037 "nbd_device": "/dev/nbd0", 00:06:21.037 "bdev_name": "Malloc0" 00:06:21.037 }, 00:06:21.037 { 00:06:21.037 "nbd_device": "/dev/nbd1", 00:06:21.037 "bdev_name": "Malloc1" 00:06:21.037 } 00:06:21.037 ]' 00:06:21.038 05:20:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.038 05:20:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.038 /dev/nbd1' 00:06:21.038 05:20:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.038 /dev/nbd1' 00:06:21.038 05:20:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.038 05:20:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.038 05:20:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.038 05:20:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.038 05:20:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.038 05:20:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.038 05:20:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.038 05:20:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.038 05:20:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.038 05:20:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.038 05:20:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.038 05:20:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.038 256+0 records in 00:06:21.038 256+0 records out 00:06:21.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048318 s, 217 MB/s 00:06:21.038 05:20:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.038 05:20:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.038 256+0 records in 00:06:21.038 256+0 records out 00:06:21.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208995 s, 50.2 MB/s 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.038 256+0 records in 00:06:21.038 256+0 records out 00:06:21.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252939 s, 41.5 MB/s 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.038 05:20:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.296 05:20:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.296 05:20:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.296 05:20:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.296 05:20:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.296 05:20:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.296 05:20:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.296 05:20:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.296 05:20:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
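The waitfornbd and waitfornbd_exit helpers seen throughout this trace poll /proc/partitions for the device name with at most 20 attempts. A simplified combined sketch follows; the helper name and the 0.1 s retry delay are assumptions, while the 20-try bound and the grep test are taken from the trace.

# Hypothetical helper combining waitfornbd (want=present) and
# waitfornbd_exit (want=absent): poll /proc/partitions up to 20 times.
wait_for_nbd_state() {
    local nbd_name=$1 want=$2 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            [ "$want" = present ] && return 0
        else
            [ "$want" = absent ] && return 0
        fi
        sleep 0.1   # retry delay is assumed; the trace only shows the loop bound and the grep
    done
    return 1
}

wait_for_nbd_state nbd0 present   # after nbd_start_disk
wait_for_nbd_state nbd0 absent    # after nbd_stop_disk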
00:06:21.296 05:20:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.296 05:20:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.554 05:20:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.554 05:20:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.554 05:20:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.554 05:20:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.554 05:20:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.554 05:20:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.554 05:20:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.554 05:20:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.554 05:20:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.554 05:20:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.554 05:20:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.812 05:20:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.812 05:20:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.812 05:20:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.812 05:20:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.812 05:20:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.812 05:20:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.812 05:20:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:21.812 05:20:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.812 05:20:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.812 05:20:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.812 05:20:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.812 05:20:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.812 05:20:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.070 05:20:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.354 [2024-07-14 05:20:29.378674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.613 [2024-07-14 05:20:29.470564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.613 [2024-07-14 05:20:29.470569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.613 [2024-07-14 05:20:29.532429] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.613 [2024-07-14 05:20:29.532501] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
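The dd and cmp output above is the data-verify pass: fill a scratch file with random data, write it onto each exported NBD device with O_DIRECT, then compare the first 1 MiB back. A condensed sketch using the same paths and sizes as the trace:

tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest

# 256 blocks of 4096 bytes = 1 MiB of random test data
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

for nbd in /dev/nbd0 /dev/nbd1; do
    # write the pattern through to the device, bypassing the page cache
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done

for nbd in /dev/nbd0 /dev/nbd1; do
    # read it back; cmp stops and exits non-zero at the first differing byte
    cmp -b -n 1M "$tmp_file" "$nbd"
done

rm "$tmp_file"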
00:06:25.138 05:20:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3107641 /var/tmp/spdk-nbd.sock 00:06:25.138 05:20:32 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3107641 ']' 00:06:25.138 05:20:32 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.138 05:20:32 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.138 05:20:32 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.138 05:20:32 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.138 05:20:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.396 05:20:32 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.396 05:20:32 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:25.396 05:20:32 event.app_repeat -- event/event.sh@39 -- # killprocess 3107641 00:06:25.396 05:20:32 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3107641 ']' 00:06:25.396 05:20:32 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3107641 00:06:25.396 05:20:32 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:25.396 05:20:32 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:25.396 05:20:32 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3107641 00:06:25.396 05:20:32 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:25.396 05:20:32 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:25.396 05:20:32 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3107641' 00:06:25.396 killing process with pid 3107641 00:06:25.396 05:20:32 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3107641 00:06:25.396 05:20:32 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3107641 00:06:25.655 spdk_app_start is called in Round 0. 00:06:25.655 Shutdown signal received, stop current app iteration 00:06:25.655 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:25.655 spdk_app_start is called in Round 1. 00:06:25.655 Shutdown signal received, stop current app iteration 00:06:25.655 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:25.655 spdk_app_start is called in Round 2. 00:06:25.655 Shutdown signal received, stop current app iteration 00:06:25.655 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:25.655 spdk_app_start is called in Round 3. 
00:06:25.655 Shutdown signal received, stop current app iteration 00:06:25.655 05:20:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:25.655 05:20:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:25.655 00:06:25.655 real 0m17.861s 00:06:25.655 user 0m38.849s 00:06:25.655 sys 0m3.201s 00:06:25.655 05:20:32 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.655 05:20:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.655 ************************************ 00:06:25.655 END TEST app_repeat 00:06:25.655 ************************************ 00:06:25.655 05:20:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:25.655 05:20:32 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:25.655 05:20:32 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.655 05:20:32 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.655 05:20:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.655 ************************************ 00:06:25.655 START TEST cpu_locks 00:06:25.655 ************************************ 00:06:25.655 05:20:32 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:25.655 * Looking for test storage... 00:06:25.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:25.655 05:20:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:25.655 05:20:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:25.655 05:20:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:25.655 05:20:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:25.655 05:20:32 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.655 05:20:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.655 05:20:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.655 ************************************ 00:06:25.655 START TEST default_locks 00:06:25.655 ************************************ 00:06:25.655 05:20:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:25.655 05:20:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3109989 00:06:25.655 05:20:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.655 05:20:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3109989 00:06:25.655 05:20:32 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3109989 ']' 00:06:25.655 05:20:32 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.655 05:20:32 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.655 05:20:32 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:25.655 05:20:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.655 05:20:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.914 [2024-07-14 05:20:32.801465] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:25.914 [2024-07-14 05:20:32.801541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3109989 ] 00:06:25.914 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.914 [2024-07-14 05:20:32.859590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.914 [2024-07-14 05:20:32.943854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.172 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.172 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:26.172 05:20:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3109989 00:06:26.172 05:20:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3109989 00:06:26.172 05:20:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.736 lslocks: write error 00:06:26.736 05:20:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3109989 00:06:26.736 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3109989 ']' 00:06:26.736 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3109989 00:06:26.736 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:26.736 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:26.736 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3109989 00:06:26.736 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:26.736 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:26.736 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3109989' 00:06:26.737 killing process with pid 3109989 00:06:26.737 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3109989 00:06:26.737 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3109989 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3109989 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3109989 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@651 
-- # waitforlisten 3109989 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3109989 ']' 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3109989) - No such process 00:06:26.995 ERROR: process (pid: 3109989) is no longer running 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:26.995 00:06:26.995 real 0m1.249s 00:06:26.995 user 0m1.166s 00:06:26.995 sys 0m0.566s 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.995 05:20:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.995 ************************************ 00:06:26.995 END TEST default_locks 00:06:26.995 ************************************ 00:06:26.995 05:20:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:26.995 05:20:34 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:26.995 05:20:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.995 05:20:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.995 ************************************ 00:06:26.995 START TEST default_locks_via_rpc 00:06:26.995 ************************************ 00:06:26.995 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:26.995 05:20:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3110153 00:06:26.995 05:20:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.995 05:20:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3110153 00:06:26.995 05:20:34 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3110153 ']' 00:06:26.995 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.995 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.995 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.995 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.995 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.254 [2024-07-14 05:20:34.105802] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:27.254 [2024-07-14 05:20:34.105922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3110153 ] 00:06:27.254 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.254 [2024-07-14 05:20:34.169003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.254 [2024-07-14 05:20:34.263786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.513 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.513 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:27.513 05:20:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:27.513 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.513 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.513 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.513 05:20:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:27.513 05:20:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.513 05:20:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.513 05:20:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.513 05:20:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:27.513 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.513 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.513 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.513 05:20:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3110153 00:06:27.513 05:20:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3110153 00:06:27.513 05:20:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.772 05:20:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3110153 00:06:27.772 05:20:34 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3110153 ']' 00:06:27.772 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3110153 00:06:27.772 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:27.772 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:27.772 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3110153 00:06:27.772 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:27.772 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:27.772 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3110153' 00:06:27.772 killing process with pid 3110153 00:06:27.772 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3110153 00:06:27.772 05:20:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3110153 00:06:28.338 00:06:28.338 real 0m1.157s 00:06:28.338 user 0m1.086s 00:06:28.338 sys 0m0.524s 00:06:28.338 05:20:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.338 05:20:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.338 ************************************ 00:06:28.338 END TEST default_locks_via_rpc 00:06:28.338 ************************************ 00:06:28.338 05:20:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:28.338 05:20:35 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:28.338 05:20:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.338 05:20:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.338 ************************************ 00:06:28.338 START TEST non_locking_app_on_locked_coremask 00:06:28.338 ************************************ 00:06:28.338 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:28.338 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3110315 00:06:28.338 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.338 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3110315 /var/tmp/spdk.sock 00:06:28.338 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3110315 ']' 00:06:28.338 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.338 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.338 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
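The locks_exist check driven at cpu_locks.sh@22 above is lslocks piped into grep; the stray "lslocks: write error" lines are most likely lslocks hitting a closed pipe after grep -q matches and exits early, not a test failure. A sketch of the check, offered as an approximation of the helper rather than its exact source:

# Succeeds if <pid> still holds the spdk_cpu_lock file lock.
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

locks_exist 3110315 && echo "core lock for pid 3110315 is held"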
00:06:28.338 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.338 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.338 [2024-07-14 05:20:35.304463] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:28.338 [2024-07-14 05:20:35.304549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3110315 ] 00:06:28.338 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.338 [2024-07-14 05:20:35.372524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.597 [2024-07-14 05:20:35.464210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.856 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.856 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:28.856 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3110327 00:06:28.856 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:28.856 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3110327 /var/tmp/spdk2.sock 00:06:28.856 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3110327 ']' 00:06:28.856 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.856 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.856 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.856 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.856 05:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.856 [2024-07-14 05:20:35.770705] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:28.856 [2024-07-14 05:20:35.770791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3110327 ] 00:06:28.856 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.856 [2024-07-14 05:20:35.866180] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.856 [2024-07-14 05:20:35.866214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.115 [2024-07-14 05:20:36.050526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.681 05:20:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.681 05:20:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:29.681 05:20:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3110315 00:06:29.681 05:20:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3110315 00:06:29.681 05:20:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.246 lslocks: write error 00:06:30.246 05:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3110315 00:06:30.246 05:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3110315 ']' 00:06:30.246 05:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3110315 00:06:30.246 05:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:30.246 05:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:30.246 05:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3110315 00:06:30.246 05:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:30.246 05:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:30.246 05:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3110315' 00:06:30.246 killing process with pid 3110315 00:06:30.246 05:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3110315 00:06:30.246 05:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3110315 00:06:31.181 05:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3110327 00:06:31.181 05:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3110327 ']' 00:06:31.181 05:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3110327 00:06:31.181 05:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:31.181 05:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:31.181 05:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3110327 00:06:31.181 05:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:31.181 05:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:31.181 05:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3110327' 00:06:31.181 
killing process with pid 3110327 00:06:31.181 05:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3110327 00:06:31.181 05:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3110327 00:06:31.440 00:06:31.440 real 0m3.278s 00:06:31.440 user 0m3.403s 00:06:31.440 sys 0m1.049s 00:06:31.440 05:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.440 05:20:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.440 ************************************ 00:06:31.440 END TEST non_locking_app_on_locked_coremask 00:06:31.440 ************************************ 00:06:31.699 05:20:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:31.699 05:20:38 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.699 05:20:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.699 05:20:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.699 ************************************ 00:06:31.699 START TEST locking_app_on_unlocked_coremask 00:06:31.699 ************************************ 00:06:31.699 05:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:31.699 05:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3110747 00:06:31.699 05:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:31.699 05:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3110747 /var/tmp/spdk.sock 00:06:31.699 05:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3110747 ']' 00:06:31.699 05:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.699 05:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.699 05:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.699 05:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.699 05:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.699 [2024-07-14 05:20:38.639934] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:31.699 [2024-07-14 05:20:38.640011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3110747 ] 00:06:31.699 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.699 [2024-07-14 05:20:38.703440] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
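Note: the lock check driving the cpu_locks cases above appears in the trace as lslocks -p <pid> piped into grep -q spdk_cpu_lock. A minimal sketch of that pattern, assuming only util-linux lslocks and a running spdk_tgt pid; the helper name below is illustrative, not the repository's exact function:

    # Return success if the given spdk_tgt pid currently holds an SPDK CPU-core
    # lock file (named /var/tmp/spdk_cpu_lock_NNN), mirroring the pattern above.
    check_cpu_lock() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    check_cpu_lock 3110315 && echo 'core lock present' || echo 'no core lock held'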
00:06:31.699 [2024-07-14 05:20:38.703479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.699 [2024-07-14 05:20:38.795418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.957 05:20:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.957 05:20:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:31.957 05:20:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3110761 00:06:31.957 05:20:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:31.957 05:20:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3110761 /var/tmp/spdk2.sock 00:06:31.957 05:20:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3110761 ']' 00:06:31.957 05:20:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.957 05:20:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.957 05:20:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.958 05:20:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.958 05:20:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.216 [2024-07-14 05:20:39.110433] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:32.216 [2024-07-14 05:20:39.110520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3110761 ] 00:06:32.217 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.217 [2024-07-14 05:20:39.211427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.475 [2024-07-14 05:20:39.395455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.042 05:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.042 05:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:33.042 05:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3110761 00:06:33.042 05:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3110761 00:06:33.042 05:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.609 lslocks: write error 00:06:33.609 05:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3110747 00:06:33.609 05:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3110747 ']' 00:06:33.609 05:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3110747 00:06:33.609 05:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:33.609 05:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.609 05:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3110747 00:06:33.609 05:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:33.609 05:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:33.609 05:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3110747' 00:06:33.609 killing process with pid 3110747 00:06:33.609 05:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3110747 00:06:33.609 05:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3110747 00:06:34.544 05:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3110761 00:06:34.544 05:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3110761 ']' 00:06:34.544 05:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3110761 00:06:34.544 05:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:34.544 05:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:34.544 05:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3110761 00:06:34.544 05:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
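Note: the locking_app_on_unlocked_coremask case above runs two spdk_tgt instances on the same core mask (0x1). The first is started with --disable-cpumask-locks, so it never claims /var/tmp/spdk_cpu_lock_000, which is why the second instance can start and take the lock itself. A stripped-down sketch of that arrangement, with a placeholder SPDK path rather than the Jenkins workspace path used above:

    # Placeholder build location; substitute a real SPDK build.
    SPDK_BIN=/path/to/spdk/build/bin
    # First target: core 0, but with core-lock files disabled.
    $SPDK_BIN/spdk_tgt -m 0x1 --disable-cpumask-locks &
    # Second target: same core mask, separate RPC socket; it can still claim
    # the core lock because the first instance opted out of locking.
    $SPDK_BIN/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    wait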
00:06:34.544 05:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:34.544 05:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3110761' 00:06:34.544 killing process with pid 3110761 00:06:34.544 05:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3110761 00:06:34.544 05:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3110761 00:06:34.803 00:06:34.803 real 0m3.211s 00:06:34.803 user 0m3.330s 00:06:34.803 sys 0m1.120s 00:06:34.803 05:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.803 05:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.803 ************************************ 00:06:34.803 END TEST locking_app_on_unlocked_coremask 00:06:34.803 ************************************ 00:06:34.803 05:20:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:34.803 05:20:41 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:34.803 05:20:41 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.803 05:20:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.803 ************************************ 00:06:34.803 START TEST locking_app_on_locked_coremask 00:06:34.803 ************************************ 00:06:34.803 05:20:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:34.803 05:20:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3111187 00:06:34.803 05:20:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.803 05:20:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3111187 /var/tmp/spdk.sock 00:06:34.804 05:20:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3111187 ']' 00:06:34.804 05:20:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.804 05:20:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.804 05:20:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.804 05:20:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.804 05:20:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.804 [2024-07-14 05:20:41.901476] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:34.804 [2024-07-14 05:20:41.901564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111187 ] 00:06:35.063 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.063 [2024-07-14 05:20:41.964112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.063 [2024-07-14 05:20:42.051983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3111195 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3111195 /var/tmp/spdk2.sock 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3111195 /var/tmp/spdk2.sock 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3111195 /var/tmp/spdk2.sock 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3111195 ']' 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:35.322 05:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.322 [2024-07-14 05:20:42.359047] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:35.322 [2024-07-14 05:20:42.359130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111195 ] 00:06:35.322 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.580 [2024-07-14 05:20:42.456561] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3111187 has claimed it. 00:06:35.580 [2024-07-14 05:20:42.456626] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:36.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3111195) - No such process 00:06:36.145 ERROR: process (pid: 3111195) is no longer running 00:06:36.145 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:36.145 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:36.145 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:36.145 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:36.145 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:36.145 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:36.145 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3111187 00:06:36.145 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3111187 00:06:36.145 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.403 lslocks: write error 00:06:36.403 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3111187 00:06:36.403 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3111187 ']' 00:06:36.403 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3111187 00:06:36.403 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:36.403 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:36.403 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3111187 00:06:36.403 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:36.403 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:36.403 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3111187' 00:06:36.403 killing process with pid 3111187 00:06:36.403 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3111187 00:06:36.403 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3111187 00:06:36.977 00:06:36.977 real 0m1.962s 00:06:36.977 user 0m2.106s 00:06:36.977 sys 0m0.629s 00:06:36.977 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.977 05:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.977 ************************************ 00:06:36.977 END TEST locking_app_on_locked_coremask 00:06:36.977 ************************************ 00:06:36.977 05:20:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:36.977 05:20:43 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:36.977 05:20:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.977 05:20:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.977 ************************************ 00:06:36.977 START TEST locking_overlapped_coremask 00:06:36.977 ************************************ 00:06:36.977 05:20:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:36.977 05:20:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3111485 00:06:36.977 05:20:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:36.977 05:20:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3111485 /var/tmp/spdk.sock 00:06:36.977 05:20:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3111485 ']' 00:06:36.977 05:20:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.977 05:20:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:36.977 05:20:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.977 05:20:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:36.977 05:20:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.977 [2024-07-14 05:20:43.912074] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:36.977 [2024-07-14 05:20:43.912156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111485 ] 00:06:36.977 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.977 [2024-07-14 05:20:43.979387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.977 [2024-07-14 05:20:44.077889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.977 [2024-07-14 05:20:44.077927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.977 [2024-07-14 05:20:44.077931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3111491 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3111491 /var/tmp/spdk2.sock 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3111491 /var/tmp/spdk2.sock 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3111491 /var/tmp/spdk2.sock 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3111491 ']' 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.235 05:20:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.493 [2024-07-14 05:20:44.388732] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:37.493 [2024-07-14 05:20:44.388815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111491 ] 00:06:37.493 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.493 [2024-07-14 05:20:44.477738] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3111485 has claimed it. 00:06:37.493 [2024-07-14 05:20:44.477800] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:38.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3111491) - No such process 00:06:38.059 ERROR: process (pid: 3111491) is no longer running 00:06:38.059 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.059 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:38.059 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:38.059 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:38.059 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:38.059 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:38.059 05:20:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:38.059 05:20:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:38.059 05:20:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:38.059 05:20:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:38.059 05:20:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3111485 00:06:38.059 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3111485 ']' 00:06:38.060 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3111485 00:06:38.060 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:38.060 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:38.060 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3111485 00:06:38.060 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:38.060 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:38.060 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3111485' 00:06:38.060 killing process with pid 3111485 00:06:38.060 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
3111485 00:06:38.060 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3111485 00:06:38.627 00:06:38.627 real 0m1.663s 00:06:38.627 user 0m4.480s 00:06:38.627 sys 0m0.478s 00:06:38.627 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.627 05:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.627 ************************************ 00:06:38.627 END TEST locking_overlapped_coremask 00:06:38.627 ************************************ 00:06:38.627 05:20:45 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:38.627 05:20:45 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:38.627 05:20:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.627 05:20:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.627 ************************************ 00:06:38.627 START TEST locking_overlapped_coremask_via_rpc 00:06:38.627 ************************************ 00:06:38.627 05:20:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:38.627 05:20:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3111657 00:06:38.627 05:20:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:38.627 05:20:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3111657 /var/tmp/spdk.sock 00:06:38.627 05:20:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3111657 ']' 00:06:38.627 05:20:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.627 05:20:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.627 05:20:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.627 05:20:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.627 05:20:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.627 [2024-07-14 05:20:45.616383] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:38.627 [2024-07-14 05:20:45.616482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111657 ] 00:06:38.627 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.627 [2024-07-14 05:20:45.675618] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
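Note: the locking_overlapped_coremask failure above follows directly from the two masks. 0x7 covers cores 0, 1 and 2, while 0x1c covers cores 2, 3 and 4, so the second target collides on core 2, exactly the core named in the claim_cpu_cores error. The overlap can be verified with plain shell arithmetic:

    # 0b00111 AND 0b11100 = 0b00100, i.e. only bit 2 (core 2) is shared.
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))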
00:06:38.627 [2024-07-14 05:20:45.675657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.886 [2024-07-14 05:20:45.766127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.886 [2024-07-14 05:20:45.769898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.886 [2024-07-14 05:20:45.769912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.145 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.145 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:39.145 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3111784 00:06:39.145 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3111784 /var/tmp/spdk2.sock 00:06:39.145 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:39.145 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3111784 ']' 00:06:39.145 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.145 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.145 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.145 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.145 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.145 [2024-07-14 05:20:46.058088] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:39.145 [2024-07-14 05:20:46.058188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111784 ] 00:06:39.145 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.145 [2024-07-14 05:20:46.148380] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:39.145 [2024-07-14 05:20:46.148412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.403 [2024-07-14 05:20:46.325014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.403 [2024-07-14 05:20:46.325075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:39.403 [2024-07-14 05:20:46.325077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.968 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.968 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:39.968 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.968 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.968 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.968 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.968 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.968 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:39.968 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.968 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:39.968 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.968 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:39.968 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.968 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.968 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.968 05:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.968 [2024-07-14 05:20:47.006966] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3111657 has claimed it. 
00:06:39.968 request: 00:06:39.968 { 00:06:39.968 "method": "framework_enable_cpumask_locks", 00:06:39.968 "req_id": 1 00:06:39.968 } 00:06:39.968 Got JSON-RPC error response 00:06:39.968 response: 00:06:39.968 { 00:06:39.968 "code": -32603, 00:06:39.968 "message": "Failed to claim CPU core: 2" 00:06:39.968 } 00:06:39.968 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:39.968 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:39.968 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.968 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:39.968 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.968 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3111657 /var/tmp/spdk.sock 00:06:39.968 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3111657 ']' 00:06:39.968 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.968 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.968 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.968 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.968 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.224 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.224 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:40.224 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3111784 /var/tmp/spdk2.sock 00:06:40.224 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3111784 ']' 00:06:40.224 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.224 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:40.224 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
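Note: the JSON-RPC exchange above shows framework_enable_cpumask_locks being rejected with -32603 because core 2 is already locked by the primary target. A sketch of issuing the same call by hand against the secondary socket; this assumes SPDK's scripts/rpc.py exposes the RPC under the same name as the "method" field shown above:

    # Expected to fail with "Failed to claim CPU core: 2" while pid 3111657
    # still holds the lock on core 2.
    /path/to/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks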
00:06:40.224 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:40.224 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.482 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.482 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:40.482 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:40.482 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:40.482 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:40.482 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:40.482 00:06:40.482 real 0m1.955s 00:06:40.482 user 0m1.008s 00:06:40.482 sys 0m0.195s 00:06:40.482 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.482 05:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.482 ************************************ 00:06:40.482 END TEST locking_overlapped_coremask_via_rpc 00:06:40.482 ************************************ 00:06:40.482 05:20:47 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:40.482 05:20:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3111657 ]] 00:06:40.482 05:20:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3111657 00:06:40.482 05:20:47 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3111657 ']' 00:06:40.482 05:20:47 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3111657 00:06:40.482 05:20:47 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:40.482 05:20:47 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:40.482 05:20:47 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3111657 00:06:40.482 05:20:47 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:40.482 05:20:47 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:40.482 05:20:47 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3111657' 00:06:40.482 killing process with pid 3111657 00:06:40.482 05:20:47 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3111657 00:06:40.482 05:20:47 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3111657 00:06:41.047 05:20:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3111784 ]] 00:06:41.047 05:20:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3111784 00:06:41.047 05:20:47 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3111784 ']' 00:06:41.047 05:20:47 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3111784 00:06:41.047 05:20:47 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:41.047 05:20:47 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:41.047 05:20:47 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3111784 00:06:41.047 05:20:48 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:41.047 05:20:48 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:41.047 05:20:48 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3111784' 00:06:41.047 killing process with pid 3111784 00:06:41.047 05:20:48 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3111784 00:06:41.047 05:20:48 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3111784 00:06:41.305 05:20:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:41.305 05:20:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:41.305 05:20:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3111657 ]] 00:06:41.305 05:20:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3111657 00:06:41.305 05:20:48 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3111657 ']' 00:06:41.305 05:20:48 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3111657 00:06:41.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3111657) - No such process 00:06:41.305 05:20:48 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3111657 is not found' 00:06:41.305 Process with pid 3111657 is not found 00:06:41.305 05:20:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3111784 ]] 00:06:41.305 05:20:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3111784 00:06:41.305 05:20:48 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3111784 ']' 00:06:41.305 05:20:48 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3111784 00:06:41.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3111784) - No such process 00:06:41.305 05:20:48 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3111784 is not found' 00:06:41.305 Process with pid 3111784 is not found 00:06:41.305 05:20:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:41.305 00:06:41.305 real 0m15.731s 00:06:41.305 user 0m27.323s 00:06:41.305 sys 0m5.455s 00:06:41.305 05:20:48 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.305 05:20:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.305 ************************************ 00:06:41.305 END TEST cpu_locks 00:06:41.305 ************************************ 00:06:41.564 00:06:41.564 real 0m40.605s 00:06:41.564 user 1m16.614s 00:06:41.564 sys 0m9.469s 00:06:41.564 05:20:48 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.564 05:20:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.564 ************************************ 00:06:41.564 END TEST event 00:06:41.564 ************************************ 00:06:41.564 05:20:48 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:41.564 05:20:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:41.564 05:20:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.564 05:20:48 -- common/autotest_common.sh@10 -- # set +x 00:06:41.564 ************************************ 00:06:41.564 START TEST thread 00:06:41.564 ************************************ 00:06:41.564 05:20:48 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:41.564 * Looking for test storage... 00:06:41.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:41.564 05:20:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:41.564 05:20:48 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:41.564 05:20:48 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.564 05:20:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.564 ************************************ 00:06:41.564 START TEST thread_poller_perf 00:06:41.564 ************************************ 00:06:41.564 05:20:48 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:41.564 [2024-07-14 05:20:48.564217] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:41.564 [2024-07-14 05:20:48.564279] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3112153 ] 00:06:41.564 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.564 [2024-07-14 05:20:48.625990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.821 [2024-07-14 05:20:48.716950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.821 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:42.755 ====================================== 00:06:42.755 busy:2716886839 (cyc) 00:06:42.755 total_run_count: 292000 00:06:42.755 tsc_hz: 2700000000 (cyc) 00:06:42.755 ====================================== 00:06:42.755 poller_cost: 9304 (cyc), 3445 (nsec) 00:06:42.755 00:06:42.755 real 0m1.258s 00:06:42.755 user 0m1.181s 00:06:42.755 sys 0m0.071s 00:06:42.755 05:20:49 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.755 05:20:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:42.755 ************************************ 00:06:42.756 END TEST thread_poller_perf 00:06:42.756 ************************************ 00:06:42.756 05:20:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:42.756 05:20:49 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:42.756 05:20:49 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.756 05:20:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.756 ************************************ 00:06:42.756 START TEST thread_poller_perf 00:06:42.756 ************************************ 00:06:42.756 05:20:49 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.014 [2024-07-14 05:20:49.868211] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
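Note: the poller_perf summaries in this section derive poller_cost from the two counters printed with it: cycles per poller call is busy divided by total_run_count, and the nanosecond figure is that cycle count scaled by tsc_hz. The tool's exact rounding is not shown in the log, but integer math on the values above reproduces the report:

    busy=2716886839; runs=292000; tsc_hz=2700000000
    cost_cyc=$(( busy / runs ))                      # 9304
    cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))  # 3445
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The same relation holds for the zero-microsecond-period run reported below (692 cyc, 256 nsec).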
00:06:43.014 [2024-07-14 05:20:49.868276] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3112312 ] 00:06:43.014 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.014 [2024-07-14 05:20:49.931168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.014 [2024-07-14 05:20:50.025771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.014 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:44.387 ====================================== 00:06:44.387 busy:2702652812 (cyc) 00:06:44.387 total_run_count: 3905000 00:06:44.387 tsc_hz: 2700000000 (cyc) 00:06:44.387 ====================================== 00:06:44.387 poller_cost: 692 (cyc), 256 (nsec) 00:06:44.387 00:06:44.387 real 0m1.254s 00:06:44.387 user 0m1.164s 00:06:44.387 sys 0m0.084s 00:06:44.387 05:20:51 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.387 05:20:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.387 ************************************ 00:06:44.387 END TEST thread_poller_perf 00:06:44.387 ************************************ 00:06:44.387 05:20:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:44.387 00:06:44.387 real 0m2.654s 00:06:44.387 user 0m2.395s 00:06:44.387 sys 0m0.258s 00:06:44.387 05:20:51 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.387 05:20:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.387 ************************************ 00:06:44.387 END TEST thread 00:06:44.387 ************************************ 00:06:44.387 05:20:51 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:44.387 05:20:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:44.387 05:20:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.387 05:20:51 -- common/autotest_common.sh@10 -- # set +x 00:06:44.387 ************************************ 00:06:44.387 START TEST accel 00:06:44.387 ************************************ 00:06:44.387 05:20:51 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:44.387 * Looking for test storage... 
00:06:44.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:44.387 05:20:51 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:44.387 05:20:51 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:44.387 05:20:51 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:44.387 05:20:51 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3112505 00:06:44.387 05:20:51 accel -- accel/accel.sh@63 -- # waitforlisten 3112505 00:06:44.387 05:20:51 accel -- common/autotest_common.sh@827 -- # '[' -z 3112505 ']' 00:06:44.387 05:20:51 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:44.387 05:20:51 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.387 05:20:51 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:44.387 05:20:51 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:44.387 05:20:51 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.387 05:20:51 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.387 05:20:51 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.387 05:20:51 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:44.387 05:20:51 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.387 05:20:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.387 05:20:51 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.387 05:20:51 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.387 05:20:51 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:44.387 05:20:51 accel -- accel/accel.sh@41 -- # jq -r . 00:06:44.387 [2024-07-14 05:20:51.276059] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:44.387 [2024-07-14 05:20:51.276172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3112505 ] 00:06:44.387 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.387 [2024-07-14 05:20:51.337196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.387 [2024-07-14 05:20:51.426250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.646 05:20:51 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.646 05:20:51 accel -- common/autotest_common.sh@860 -- # return 0 00:06:44.646 05:20:51 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:44.646 05:20:51 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:44.646 05:20:51 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:44.646 05:20:51 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:44.646 05:20:51 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:44.646 05:20:51 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:44.646 05:20:51 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.646 05:20:51 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:44.646 05:20:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.646 05:20:51 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.646 05:20:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:44.646 05:20:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.646 05:20:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:44.646 05:20:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.646 05:20:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:44.646 05:20:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.646 05:20:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:44.646 05:20:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.646 05:20:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:44.646 05:20:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.646 05:20:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:44.646 05:20:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.646 05:20:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:44.646 05:20:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.646 05:20:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:44.646 05:20:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.646 05:20:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:44.646 05:20:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.646 05:20:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:44.646 05:20:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.646 05:20:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:44.646 05:20:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.646 
05:20:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:44.646 05:20:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.646 05:20:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:44.646 05:20:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.646 05:20:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:44.646 05:20:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.646 05:20:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:44.646 05:20:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:44.646 05:20:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.646 05:20:51 accel -- accel/accel.sh@75 -- # killprocess 3112505 00:06:44.646 05:20:51 accel -- common/autotest_common.sh@946 -- # '[' -z 3112505 ']' 00:06:44.646 05:20:51 accel -- common/autotest_common.sh@950 -- # kill -0 3112505 00:06:44.646 05:20:51 accel -- common/autotest_common.sh@951 -- # uname 00:06:44.646 05:20:51 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:44.646 05:20:51 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3112505 00:06:44.904 05:20:51 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:44.904 05:20:51 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:44.904 05:20:51 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3112505' 00:06:44.904 killing process with pid 3112505 00:06:44.904 05:20:51 accel -- common/autotest_common.sh@965 -- # kill 3112505 00:06:44.904 05:20:51 accel -- common/autotest_common.sh@970 -- # wait 3112505 00:06:45.163 05:20:52 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:45.163 05:20:52 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:45.163 05:20:52 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:45.163 05:20:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.163 05:20:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.163 05:20:52 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:45.163 05:20:52 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:45.163 05:20:52 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:45.163 05:20:52 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.163 05:20:52 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.163 05:20:52 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.163 05:20:52 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.163 05:20:52 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.163 05:20:52 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:45.163 05:20:52 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:45.163 05:20:52 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.163 05:20:52 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:45.163 05:20:52 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:45.163 05:20:52 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:45.163 05:20:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.163 05:20:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.163 ************************************ 00:06:45.163 START TEST accel_missing_filename 00:06:45.163 ************************************ 00:06:45.163 05:20:52 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:45.163 05:20:52 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:45.163 05:20:52 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:45.163 05:20:52 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:45.163 05:20:52 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.163 05:20:52 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:45.163 05:20:52 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.163 05:20:52 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:45.163 05:20:52 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:45.163 05:20:52 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:45.163 05:20:52 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.163 05:20:52 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.163 05:20:52 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.163 05:20:52 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.163 05:20:52 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.163 05:20:52 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:45.163 05:20:52 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:45.421 [2024-07-14 05:20:52.276091] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:45.422 [2024-07-14 05:20:52.276154] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3112674 ] 00:06:45.422 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.422 [2024-07-14 05:20:52.340374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.422 [2024-07-14 05:20:52.432589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.422 [2024-07-14 05:20:52.490948] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.679 [2024-07-14 05:20:52.574965] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:45.679 A filename is required. 
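The failure above is the expected outcome: accel_perf was started with -w compress but no -l input file, so it aborts with "A filename is required." and the NOT wrapper counts the non-zero exit as a pass. A minimal by-hand sketch of the same negative check, run outside the test framework (the binary path is copied from the trace; the scripted runs also feed a JSON config over /dev/fd/62, which is omitted here, and the exact exit-code bookkeeping of autotest_common.sh is not reproduced):

perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
# compress with no -l input file should fail; invert the status like the NOT helper does
if "$perf" -t 1 -w compress; then
    echo "FAIL: accel_perf unexpectedly succeeded without an input file" >&2
    exit 1
else
    echo "PASS: missing filename was rejected as expected"
fi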
00:06:45.679 05:20:52 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:45.679 05:20:52 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.679 05:20:52 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:45.679 05:20:52 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:45.679 05:20:52 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:45.679 05:20:52 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.679 00:06:45.679 real 0m0.399s 00:06:45.679 user 0m0.278s 00:06:45.679 sys 0m0.151s 00:06:45.679 05:20:52 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.679 05:20:52 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:45.679 ************************************ 00:06:45.679 END TEST accel_missing_filename 00:06:45.679 ************************************ 00:06:45.679 05:20:52 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:45.679 05:20:52 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:45.679 05:20:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.679 05:20:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.679 ************************************ 00:06:45.679 START TEST accel_compress_verify 00:06:45.679 ************************************ 00:06:45.679 05:20:52 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:45.679 05:20:52 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:45.679 05:20:52 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:45.679 05:20:52 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:45.679 05:20:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.679 05:20:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:45.679 05:20:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.679 05:20:52 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:45.679 05:20:52 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:45.679 05:20:52 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:45.679 05:20:52 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.679 05:20:52 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.679 05:20:52 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.679 05:20:52 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.679 05:20:52 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.679 
05:20:52 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:45.679 05:20:52 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:45.679 [2024-07-14 05:20:52.722479] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:45.679 [2024-07-14 05:20:52.722543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3112726 ] 00:06:45.679 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.679 [2024-07-14 05:20:52.781160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.937 [2024-07-14 05:20:52.871564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.937 [2024-07-14 05:20:52.928455] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.937 [2024-07-14 05:20:53.004449] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:46.196 00:06:46.196 Compression does not support the verify option, aborting. 00:06:46.196 05:20:53 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:46.196 05:20:53 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.196 05:20:53 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:46.196 05:20:53 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:46.196 05:20:53 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:46.196 05:20:53 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.196 00:06:46.196 real 0m0.381s 00:06:46.196 user 0m0.278s 00:06:46.196 sys 0m0.137s 00:06:46.196 05:20:53 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.196 05:20:53 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:46.196 ************************************ 00:06:46.196 END TEST accel_compress_verify 00:06:46.196 ************************************ 00:06:46.196 05:20:53 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:46.196 05:20:53 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:46.196 05:20:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.196 05:20:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.196 ************************************ 00:06:46.196 START TEST accel_wrong_workload 00:06:46.196 ************************************ 00:06:46.196 05:20:53 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:46.196 05:20:53 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:46.196 05:20:53 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:46.196 05:20:53 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:46.196 05:20:53 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.196 05:20:53 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:46.196 05:20:53 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.196 05:20:53 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:46.196 05:20:53 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:46.196 05:20:53 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:46.196 05:20:53 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.196 05:20:53 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.196 05:20:53 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.196 05:20:53 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.196 05:20:53 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.196 05:20:53 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:46.196 05:20:53 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:46.196 Unsupported workload type: foobar 00:06:46.196 [2024-07-14 05:20:53.149384] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:46.196 accel_perf options: 00:06:46.196 [-h help message] 00:06:46.196 [-q queue depth per core] 00:06:46.196 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:46.196 [-T number of threads per core 00:06:46.196 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:46.196 [-t time in seconds] 00:06:46.196 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:46.196 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:46.196 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:46.196 [-l for compress/decompress workloads, name of uncompressed input file 00:06:46.196 [-S for crc32c workload, use this seed value (default 0) 00:06:46.196 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:46.196 [-f for fill workload, use this BYTE value (default 255) 00:06:46.196 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:46.196 [-y verify result if this switch is on] 00:06:46.196 [-a tasks to allocate per core (default: same value as -q)] 00:06:46.196 Can be used to spread operations across a wider range of memory. 
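The usage text above enumerates the accel_perf switches exercised throughout this run. For reference, a sketch of a valid invocation assembled only from those switches (values are illustrative, the binary path is taken from the trace, and the scripted tests additionally pass a JSON config via -c /dev/fd/62, omitted here):

perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
# crc32c for 1 second with seed 32, queue depth 64, 4 KiB transfers, and result verification (-y)
"$perf" -t 1 -w crc32c -S 32 -q 64 -o 4096 -y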
00:06:46.196 05:20:53 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:46.196 05:20:53 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.196 05:20:53 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:46.196 05:20:53 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.196 00:06:46.196 real 0m0.023s 00:06:46.196 user 0m0.010s 00:06:46.196 sys 0m0.013s 00:06:46.196 05:20:53 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.196 05:20:53 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:46.196 ************************************ 00:06:46.196 END TEST accel_wrong_workload 00:06:46.196 ************************************ 00:06:46.196 Error: writing output failed: Broken pipe 00:06:46.196 05:20:53 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:46.196 05:20:53 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:46.196 05:20:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.196 05:20:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.196 ************************************ 00:06:46.196 START TEST accel_negative_buffers 00:06:46.196 ************************************ 00:06:46.196 05:20:53 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:46.196 05:20:53 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:46.196 05:20:53 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:46.196 05:20:53 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:46.196 05:20:53 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.197 05:20:53 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:46.197 05:20:53 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.197 05:20:53 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:46.197 05:20:53 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:46.197 05:20:53 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:46.197 05:20:53 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.197 05:20:53 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.197 05:20:53 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.197 05:20:53 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.197 05:20:53 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.197 05:20:53 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:46.197 05:20:53 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:46.197 -x option must be non-negative. 
00:06:46.197 [2024-07-14 05:20:53.220824] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:46.197 accel_perf options: 00:06:46.197 [-h help message] 00:06:46.197 [-q queue depth per core] 00:06:46.197 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:46.197 [-T number of threads per core 00:06:46.197 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:46.197 [-t time in seconds] 00:06:46.197 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:46.197 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:46.197 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:46.197 [-l for compress/decompress workloads, name of uncompressed input file 00:06:46.197 [-S for crc32c workload, use this seed value (default 0) 00:06:46.197 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:46.197 [-f for fill workload, use this BYTE value (default 255) 00:06:46.197 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:46.197 [-y verify result if this switch is on] 00:06:46.197 [-a tasks to allocate per core (default: same value as -q)] 00:06:46.197 Can be used to spread operations across a wider range of memory. 00:06:46.197 05:20:53 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:46.197 05:20:53 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.197 05:20:53 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:46.197 05:20:53 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.197 00:06:46.197 real 0m0.024s 00:06:46.197 user 0m0.014s 00:06:46.197 sys 0m0.010s 00:06:46.197 05:20:53 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.197 05:20:53 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:46.197 ************************************ 00:06:46.197 END TEST accel_negative_buffers 00:06:46.197 ************************************ 00:06:46.197 Error: writing output failed: Broken pipe 00:06:46.197 05:20:53 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:46.197 05:20:53 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:46.197 05:20:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.197 05:20:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.197 ************************************ 00:06:46.197 START TEST accel_crc32c 00:06:46.197 ************************************ 00:06:46.197 05:20:53 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:46.197 05:20:53 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:46.197 05:20:53 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:46.197 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.197 05:20:53 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:46.197 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.197 05:20:53 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:46.197 05:20:53 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:46.197 05:20:53 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.197 05:20:53 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.197 05:20:53 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.197 05:20:53 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.197 05:20:53 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.197 05:20:53 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:46.197 05:20:53 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:46.197 [2024-07-14 05:20:53.282041] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:46.197 [2024-07-14 05:20:53.282110] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3112885 ] 00:06:46.455 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.455 [2024-07-14 05:20:53.344166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.455 [2024-07-14 05:20:53.438095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.455 05:20:53 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.455 05:20:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.827 05:20:54 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:47.827 05:20:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.827 00:06:47.827 real 0m1.410s 00:06:47.827 user 0m1.269s 00:06:47.827 sys 0m0.145s 00:06:47.827 05:20:54 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.827 05:20:54 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:47.827 ************************************ 00:06:47.827 END TEST accel_crc32c 00:06:47.827 ************************************ 00:06:47.827 05:20:54 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:47.827 05:20:54 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:47.827 05:20:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.827 05:20:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.827 ************************************ 00:06:47.827 START TEST accel_crc32c_C2 00:06:47.827 ************************************ 00:06:47.827 05:20:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:47.827 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.827 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:47.827 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.827 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:47.827 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.827 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:47.827 05:20:54 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.827 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.827 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.827 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.827 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.827 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.827 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:47.827 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:47.827 [2024-07-14 05:20:54.740450] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:47.827 [2024-07-14 05:20:54.740514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3113040 ] 00:06:47.827 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.827 [2024-07-14 05:20:54.804991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.827 [2024-07-14 05:20:54.897779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.086 05:20:54 accel.accel_crc32c_C2 
-- accel/accel.sh@19 -- # IFS=: 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.086 05:20:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.020 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 
00:06:49.020 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.020 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.020 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.020 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.020 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.020 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.020 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.020 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.020 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.020 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.020 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.020 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.020 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.020 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.279 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.279 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.279 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.279 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.279 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.279 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.279 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.279 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.279 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.279 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.279 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:49.279 05:20:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.279 00:06:49.279 real 0m1.405s 00:06:49.279 user 0m1.251s 00:06:49.279 sys 0m0.157s 00:06:49.279 05:20:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.279 05:20:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:49.279 ************************************ 00:06:49.279 END TEST accel_crc32c_C2 00:06:49.279 ************************************ 00:06:49.279 05:20:56 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:49.279 05:20:56 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:49.279 05:20:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.279 05:20:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.279 ************************************ 00:06:49.279 START TEST accel_copy 00:06:49.279 ************************************ 00:06:49.279 05:20:56 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:49.279 05:20:56 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:49.279 05:20:56 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:49.279 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.279 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.279 05:20:56 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:49.279 
05:20:56 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:49.279 05:20:56 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:49.279 05:20:56 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.279 05:20:56 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.279 05:20:56 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.279 05:20:56 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.279 05:20:56 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.279 05:20:56 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:49.279 05:20:56 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:49.279 [2024-07-14 05:20:56.192285] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:49.279 [2024-07-14 05:20:56.192346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3113276 ] 00:06:49.279 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.279 [2024-07-14 05:20:56.254272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.279 [2024-07-14 05:20:56.345596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.538 05:20:56 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.538 05:20:56 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.539 05:20:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.503 05:20:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:50.504 05:20:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.504 00:06:50.504 real 0m1.394s 00:06:50.504 user 0m1.261s 00:06:50.504 sys 0m0.135s 00:06:50.504 05:20:57 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.504 05:20:57 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:50.504 ************************************ 00:06:50.504 END TEST accel_copy 00:06:50.504 ************************************ 00:06:50.775 05:20:57 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:50.775 05:20:57 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:50.775 05:20:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.775 05:20:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.775 ************************************ 00:06:50.775 START TEST accel_fill 00:06:50.775 ************************************ 00:06:50.775 05:20:57 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.775 05:20:57 
accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:50.775 [2024-07-14 05:20:57.635294] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:50.775 [2024-07-14 05:20:57.635358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3113474 ] 00:06:50.775 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.775 [2024-07-14 05:20:57.700205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.775 [2024-07-14 05:20:57.793033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.775 05:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.150 05:20:59 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:52.150 05:20:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.150 00:06:52.150 real 0m1.414s 00:06:52.150 user 0m1.262s 00:06:52.150 sys 0m0.154s 00:06:52.150 05:20:59 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.150 05:20:59 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:52.150 ************************************ 00:06:52.150 END TEST accel_fill 00:06:52.150 ************************************ 00:06:52.150 05:20:59 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:52.150 05:20:59 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:52.150 05:20:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.150 05:20:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.150 ************************************ 00:06:52.150 START TEST accel_copy_crc32c 00:06:52.150 ************************************ 00:06:52.150 05:20:59 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:52.150 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:52.150 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:52.150 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.150 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:52.150 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.150 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:52.150 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:52.150 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.150 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.150 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.150 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.150 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.150 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:52.150 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
00:06:52.150 [2024-07-14 05:20:59.094450] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:52.150 [2024-07-14 05:20:59.094510] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3113633 ] 00:06:52.150 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.150 [2024-07-14 05:20:59.156237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.150 [2024-07-14 05:20:59.249135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.409 05:20:59 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.410 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.410 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.410 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.410 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:52.410 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.410 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.410 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.410 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.410 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.410 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.410 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.410 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.410 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.410 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.410 05:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.786 00:06:53.786 real 0m1.398s 00:06:53.786 user 0m1.267s 00:06:53.786 sys 0m0.133s 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.786 05:21:00 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:53.786 ************************************ 00:06:53.786 END TEST accel_copy_crc32c 00:06:53.786 ************************************ 00:06:53.786 05:21:00 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:53.786 05:21:00 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:53.786 05:21:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.786 05:21:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.786 ************************************ 00:06:53.786 START TEST accel_copy_crc32c_C2 00:06:53.786 ************************************ 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:53.786 [2024-07-14 05:21:00.538391] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:53.786 [2024-07-14 05:21:00.538455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3113829 ] 00:06:53.786 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.786 [2024-07-14 05:21:00.603198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.786 [2024-07-14 05:21:00.705765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.786 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.787 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.787 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.787 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:53.787 05:21:00 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:53.787 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.787 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.787 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.787 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.787 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.787 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.787 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.787 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.787 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.787 05:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.161 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.161 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.161 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.161 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.161 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.161 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.161 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.161 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.161 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.161 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.162 00:06:55.162 real 0m1.408s 00:06:55.162 user 0m1.265s 00:06:55.162 sys 0m0.144s 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.162 05:21:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:55.162 
************************************ 00:06:55.162 END TEST accel_copy_crc32c_C2 00:06:55.162 ************************************ 00:06:55.162 05:21:01 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:55.162 05:21:01 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:55.162 05:21:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.162 05:21:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.162 ************************************ 00:06:55.162 START TEST accel_dualcast 00:06:55.162 ************************************ 00:06:55.162 05:21:01 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:55.162 05:21:01 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:55.162 05:21:01 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:55.162 05:21:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:01 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:55.162 05:21:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:01 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:55.162 05:21:01 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:55.162 05:21:01 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.162 05:21:01 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.162 05:21:01 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.162 05:21:01 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.162 05:21:01 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.162 05:21:01 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:55.162 05:21:01 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:55.162 [2024-07-14 05:21:01.989634] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:55.162 [2024-07-14 05:21:01.989703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3114170 ] 00:06:55.162 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.162 [2024-07-14 05:21:02.051004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.162 [2024-07-14 05:21:02.140885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 
05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 05:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:56.537 05:21:03 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:56.537 05:21:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.537 00:06:56.537 real 0m1.394s 00:06:56.537 user 0m1.250s 00:06:56.537 sys 0m0.145s 00:06:56.537 05:21:03 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.537 05:21:03 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:56.537 ************************************ 00:06:56.537 END TEST accel_dualcast 00:06:56.537 ************************************ 00:06:56.537 05:21:03 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:56.537 05:21:03 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:56.537 05:21:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.537 05:21:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.537 ************************************ 00:06:56.537 START TEST accel_compare 00:06:56.537 ************************************ 00:06:56.537 05:21:03 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:56.537 05:21:03 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:56.537 05:21:03 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:56.537 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.537 05:21:03 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:56.537 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.537 05:21:03 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:56.537 05:21:03 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:56.537 05:21:03 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.537 05:21:03 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.537 05:21:03 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.537 05:21:03 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.537 05:21:03 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.537 05:21:03 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:56.537 05:21:03 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:56.537 [2024-07-14 05:21:03.440299] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:56.537 [2024-07-14 05:21:03.440364] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3114332 ] 00:06:56.537 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.537 [2024-07-14 05:21:03.504413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.537 [2024-07-14 05:21:03.597234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:56.795 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.796 05:21:03 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.796 05:21:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.731 05:21:04 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:57.731 05:21:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.731 00:06:57.731 real 0m1.412s 00:06:57.731 user 0m1.265s 00:06:57.731 sys 0m0.149s 00:06:57.731 05:21:04 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.731 05:21:04 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:57.731 ************************************ 00:06:57.731 END TEST accel_compare 00:06:57.731 ************************************ 00:06:57.990 05:21:04 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:57.990 05:21:04 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:57.990 05:21:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.990 05:21:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.990 ************************************ 00:06:57.990 START TEST accel_xor 00:06:57.990 ************************************ 00:06:57.990 05:21:04 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:57.990 05:21:04 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:57.990 05:21:04 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:57.990 05:21:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.990 05:21:04 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:57.990 05:21:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.990 05:21:04 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:57.990 05:21:04 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:57.990 05:21:04 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.990 05:21:04 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.990 05:21:04 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.990 05:21:04 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.990 05:21:04 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.990 05:21:04 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:57.990 05:21:04 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:57.990 [2024-07-14 05:21:04.898972] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:57.990 [2024-07-14 05:21:04.899039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3114485 ] 00:06:57.990 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.990 [2024-07-14 05:21:04.959975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.990 [2024-07-14 05:21:05.053661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.248 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.249 05:21:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.622 
05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.622 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.623 00:06:59.623 real 0m1.412s 00:06:59.623 user 0m1.265s 00:06:59.623 sys 0m0.148s 00:06:59.623 05:21:06 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.623 05:21:06 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:59.623 ************************************ 00:06:59.623 END TEST accel_xor 00:06:59.623 ************************************ 00:06:59.623 05:21:06 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:59.623 05:21:06 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:59.623 05:21:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.623 05:21:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.623 ************************************ 00:06:59.623 START TEST accel_xor 00:06:59.623 ************************************ 00:06:59.623 05:21:06 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:59.623 [2024-07-14 05:21:06.355795] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
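The two-source xor pass above completes (real 0m1.412s), and the trace here re-runs the same workload with three source buffers. Stripped of the harness plumbing, it is the accel_perf example with the flags shown in the trace; a minimal standalone sketch, assuming the same spdk build tree and omitting the JSON config the harness normally pipes in on /dev/fd/62, would be:

    # xor workload, 3 source buffers, run for 1 second, verify the output (-y)
    # flags copied from the trace above; dropping -c /dev/fd/62 is an assumption
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w xor -y -x 3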
00:06:59.623 [2024-07-14 05:21:06.355856] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3114707 ] 00:06:59.623 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.623 [2024-07-14 05:21:06.417941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.623 [2024-07-14 05:21:06.511347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.623 05:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.999 
05:21:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:00.999 05:21:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.999 00:07:00.999 real 0m1.399s 00:07:00.999 user 0m1.260s 00:07:00.999 sys 0m0.140s 00:07:00.999 05:21:07 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.999 05:21:07 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:00.999 ************************************ 00:07:00.999 END TEST accel_xor 00:07:00.999 ************************************ 00:07:00.999 05:21:07 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:00.999 05:21:07 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:00.999 05:21:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.999 05:21:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.999 ************************************ 00:07:00.999 START TEST accel_dif_verify 00:07:00.999 ************************************ 00:07:00.999 05:21:07 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:00.999 05:21:07 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:00.999 05:21:07 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:00.999 05:21:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.999 05:21:07 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:00.999 05:21:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.999 05:21:07 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:00.999 05:21:07 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:00.999 05:21:07 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.999 05:21:07 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.999 05:21:07 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.999 05:21:07 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.999 05:21:07 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.999 05:21:07 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:00.999 05:21:07 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:00.999 [2024-07-14 05:21:07.798133] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
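The dif_verify case launched here follows the same wrapper pattern, with only the workload name changing. A hedged standalone equivalent of the traced command (again leaving out the fd-based JSON config, which is an assumption):

    # dif_verify workload for 1 second; the trace shows it landing on the software module
    ./build/examples/accel_perf -t 1 -w dif_verify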
00:07:00.999 [2024-07-14 05:21:07.798204] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3115163 ] 00:07:00.999 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.999 [2024-07-14 05:21:07.860616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.999 [2024-07-14 05:21:07.954529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.999 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.999 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.999 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 
05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.000 05:21:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.374 
05:21:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:02.374 05:21:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.374 00:07:02.374 real 0m1.396s 00:07:02.374 user 0m1.248s 00:07:02.374 sys 0m0.152s 00:07:02.374 05:21:09 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.374 05:21:09 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:02.374 ************************************ 00:07:02.374 END TEST accel_dif_verify 00:07:02.375 ************************************ 00:07:02.375 05:21:09 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:02.375 05:21:09 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:02.375 05:21:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.375 05:21:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.375 ************************************ 00:07:02.375 START TEST accel_dif_generate 00:07:02.375 ************************************ 00:07:02.375 05:21:09 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
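Next the harness moves on to dif_generate, again through accel_test/accel_perf with only the -w argument changing; a standalone sketch under the same assumptions as before:

    # generate DIF metadata for 1 second (workload name taken from the trace)
    ./build/examples/accel_perf -t 1 -w dif_generate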
00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:02.375 [2024-07-14 05:21:09.233001] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:02.375 [2024-07-14 05:21:09.233057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3115574 ] 00:07:02.375 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.375 [2024-07-14 05:21:09.293260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.375 [2024-07-14 05:21:09.383324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.375 05:21:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:03.749 05:21:10 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.749 00:07:03.749 real 0m1.399s 00:07:03.749 user 0m1.268s 00:07:03.749 sys 
0m0.135s 00:07:03.749 05:21:10 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.749 05:21:10 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:03.749 ************************************ 00:07:03.749 END TEST accel_dif_generate 00:07:03.749 ************************************ 00:07:03.749 05:21:10 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:03.749 05:21:10 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:03.749 05:21:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.749 05:21:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.749 ************************************ 00:07:03.749 START TEST accel_dif_generate_copy 00:07:03.749 ************************************ 00:07:03.749 05:21:10 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:03.749 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:03.749 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:03.749 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.749 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:03.749 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.750 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:03.750 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:03.750 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.750 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.750 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.750 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.750 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.750 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:03.750 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:03.750 [2024-07-14 05:21:10.680576] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
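dif_generate_copy is driven the same way; the traced line is run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy, which bottoms out in the accel_perf call sketched below (without the /dev/fd/62 config, an assumption):

    # generate DIF metadata and copy the data in one operation, 1 second run
    ./build/examples/accel_perf -t 1 -w dif_generate_copy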
00:07:03.750 [2024-07-14 05:21:10.680641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3115736 ] 00:07:03.750 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.750 [2024-07-14 05:21:10.742584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.750 [2024-07-14 05:21:10.834616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.008 05:21:10 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.008 05:21:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.377 00:07:05.377 real 0m1.396s 00:07:05.377 user 0m1.258s 00:07:05.377 sys 0m0.141s 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.377 05:21:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:05.377 ************************************ 00:07:05.377 END TEST accel_dif_generate_copy 00:07:05.377 ************************************ 00:07:05.377 05:21:12 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:05.377 05:21:12 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.377 05:21:12 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:05.377 05:21:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.377 05:21:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.377 ************************************ 00:07:05.377 START TEST accel_comp 00:07:05.377 ************************************ 00:07:05.377 05:21:12 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:05.377 [2024-07-14 05:21:12.121687] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:05.377 [2024-07-14 05:21:12.121755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3116003 ] 00:07:05.377 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.377 [2024-07-14 05:21:12.184083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.377 [2024-07-14 05:21:12.277655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.377 
05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.377 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.378 05:21:12 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.378 05:21:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:06.802 05:21:13 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.802 00:07:06.802 real 0m1.398s 00:07:06.802 user 0m1.264s 00:07:06.802 sys 0m0.136s 00:07:06.802 05:21:13 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.802 05:21:13 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:06.802 ************************************ 00:07:06.802 END TEST accel_comp 00:07:06.802 ************************************ 00:07:06.802 05:21:13 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.802 05:21:13 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:06.802 05:21:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.802 05:21:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.802 ************************************ 00:07:06.802 START TEST accel_decomp 00:07:06.802 ************************************ 00:07:06.802 05:21:13 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:06.802 [2024-07-14 05:21:13.559301] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:06.802 [2024-07-14 05:21:13.559364] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3116171 ] 00:07:06.802 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.802 [2024-07-14 05:21:13.619735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.802 [2024-07-14 05:21:13.713288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.802 05:21:13 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.802 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.803 05:21:13 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.803 05:21:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:08.176 05:21:14 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.176 00:07:08.176 real 0m1.410s 00:07:08.176 user 0m1.260s 00:07:08.176 sys 0m0.153s 00:07:08.176 05:21:14 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.176 05:21:14 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:08.176 ************************************ 00:07:08.176 END TEST accel_decomp 00:07:08.176 ************************************ 00:07:08.176 
05:21:14 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:08.176 05:21:14 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:08.176 05:21:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.176 05:21:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.176 ************************************ 00:07:08.176 START TEST accel_decmop_full 00:07:08.176 ************************************ 00:07:08.176 05:21:14 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:08.176 05:21:14 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:08.176 05:21:14 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:08.176 05:21:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:14 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:08.176 05:21:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:08.176 [2024-07-14 05:21:15.015465] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:08.176 [2024-07-14 05:21:15.015529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3116326 ] 00:07:08.176 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.176 [2024-07-14 05:21:15.077348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.176 [2024-07-14 05:21:15.171022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.176 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.177 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.177 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.177 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.177 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.177 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.177 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:08.177 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.177 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.177 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.177 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:08.177 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.177 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.177 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.177 05:21:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:08.177 05:21:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.177 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.177 05:21:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:09.548 05:21:16 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.548 00:07:09.548 real 0m1.423s 00:07:09.548 user 0m1.290s 00:07:09.548 sys 0m0.137s 00:07:09.548 05:21:16 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.548 05:21:16 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:09.548 ************************************ 00:07:09.548 END TEST accel_decmop_full 00:07:09.548 ************************************ 00:07:09.548 05:21:16 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:09.548 05:21:16 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:09.548 05:21:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.548 05:21:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.548 ************************************ 00:07:09.548 START TEST accel_decomp_mcore 00:07:09.548 ************************************ 00:07:09.548 05:21:16 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:09.548 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:09.548 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:09.548 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.548 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:09.548 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.548 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:09.548 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:09.548 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.548 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.548 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.548 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.548 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.548 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:09.548 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:09.548 [2024-07-14 05:21:16.479701] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:09.548 [2024-07-14 05:21:16.479765] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3116487 ] 00:07:09.548 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.548 [2024-07-14 05:21:16.544356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.548 [2024-07-14 05:21:16.639698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.548 [2024-07-14 05:21:16.639767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.548 [2024-07-14 05:21:16.639857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.548 [2024-07-14 05:21:16.639859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.807 05:21:16 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.807 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.808 05:21:16 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.808 05:21:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.180 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.181 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:11.181 05:21:17 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.181 00:07:11.181 real 0m1.415s 00:07:11.181 user 0m4.708s 00:07:11.181 sys 0m0.156s 00:07:11.181 05:21:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.181 05:21:17 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:11.181 ************************************ 00:07:11.181 END TEST accel_decomp_mcore 00:07:11.181 ************************************ 00:07:11.181 05:21:17 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:11.181 05:21:17 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:11.181 05:21:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.181 05:21:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.181 ************************************ 00:07:11.181 START TEST accel_decomp_full_mcore 00:07:11.181 ************************************ 00:07:11.181 05:21:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:11.181 05:21:17 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:11.181 05:21:17 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:11.181 05:21:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:17 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:11.181 05:21:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:17 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:11.181 05:21:17 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:11.181 05:21:17 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.181 05:21:17 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.181 05:21:17 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.181 05:21:17 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.181 05:21:17 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.181 05:21:17 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:11.181 05:21:17 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:11.181 [2024-07-14 05:21:17.942942] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:11.181 [2024-07-14 05:21:17.943001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3116756 ] 00:07:11.181 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.181 [2024-07-14 05:21:18.005837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.181 [2024-07-14 05:21:18.103286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.181 [2024-07-14 05:21:18.103336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.181 [2024-07-14 05:21:18.103395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.181 [2024-07-14 05:21:18.103398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:11.181 05:21:18 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.181 05:21:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.557 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:12.557 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.557 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.557 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.557 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.558 00:07:12.558 real 0m1.415s 00:07:12.558 user 0m4.717s 00:07:12.558 sys 0m0.158s 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.558 05:21:19 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:12.558 ************************************ 00:07:12.558 END TEST accel_decomp_full_mcore 00:07:12.558 ************************************ 00:07:12.558 05:21:19 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:12.558 05:21:19 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:12.558 05:21:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.558 05:21:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.558 ************************************ 00:07:12.558 START TEST accel_decomp_mthread 00:07:12.558 ************************************ 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:07:12.558 [2024-07-14 05:21:19.408955] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:12.558 [2024-07-14 05:21:19.409017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3116926 ] 00:07:12.558 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.558 [2024-07-14 05:21:19.472559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.558 [2024-07-14 05:21:19.563915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.559 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.559 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.559 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.559 05:21:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.559 05:21:19 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.936 00:07:13.936 real 0m1.420s 00:07:13.936 user 0m1.271s 00:07:13.936 sys 0m0.152s 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.936 05:21:20 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:13.936 ************************************ 00:07:13.936 END TEST accel_decomp_mthread 00:07:13.936 ************************************ 00:07:13.936 05:21:20 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.936 05:21:20 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:13.936 05:21:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.936 05:21:20 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.936 ************************************ 00:07:13.936 START TEST accel_decomp_full_mthread 00:07:13.936 ************************************ 00:07:13.936 05:21:20 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.936 05:21:20 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:13.936 05:21:20 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:13.936 05:21:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.936 05:21:20 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.937 05:21:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.937 05:21:20 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.937 05:21:20 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:13.937 05:21:20 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.937 05:21:20 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.937 05:21:20 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.937 05:21:20 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.937 05:21:20 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.937 05:21:20 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:13.937 05:21:20 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:13.937 [2024-07-14 05:21:20.868650] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:13.937 [2024-07-14 05:21:20.868715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3117077 ] 00:07:13.937 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.937 [2024-07-14 05:21:20.929227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.937 [2024-07-14 05:21:21.022095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.195 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.195 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.196 05:21:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.571 00:07:15.571 real 0m1.433s 00:07:15.571 user 0m1.291s 00:07:15.571 sys 0m0.145s 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.571 05:21:22 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:15.571 ************************************ 00:07:15.571 END TEST accel_decomp_full_mthread 00:07:15.571 
************************************ 00:07:15.571 05:21:22 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:15.571 05:21:22 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:15.571 05:21:22 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:15.571 05:21:22 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:15.571 05:21:22 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.571 05:21:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.571 05:21:22 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.571 05:21:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.571 05:21:22 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.571 05:21:22 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.571 05:21:22 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.571 05:21:22 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:15.571 05:21:22 accel -- accel/accel.sh@41 -- # jq -r . 00:07:15.571 ************************************ 00:07:15.571 START TEST accel_dif_functional_tests 00:07:15.571 ************************************ 00:07:15.571 05:21:22 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:15.571 [2024-07-14 05:21:22.375779] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:15.571 [2024-07-14 05:21:22.375854] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3117352 ] 00:07:15.571 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.571 [2024-07-14 05:21:22.438149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.571 [2024-07-14 05:21:22.533404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.571 [2024-07-14 05:21:22.533469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.571 [2024-07-14 05:21:22.533472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.571 00:07:15.571 00:07:15.571 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.571 http://cunit.sourceforge.net/ 00:07:15.571 00:07:15.571 00:07:15.571 Suite: accel_dif 00:07:15.571 Test: verify: DIF generated, GUARD check ...passed 00:07:15.571 Test: verify: DIF generated, APPTAG check ...passed 00:07:15.571 Test: verify: DIF generated, REFTAG check ...passed 00:07:15.571 Test: verify: DIF not generated, GUARD check ...[2024-07-14 05:21:22.620577] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:15.571 passed 00:07:15.571 Test: verify: DIF not generated, APPTAG check ...[2024-07-14 05:21:22.620657] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:15.571 passed 00:07:15.571 Test: verify: DIF not generated, REFTAG check ...[2024-07-14 05:21:22.620690] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:15.571 passed 00:07:15.571 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:15.571 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-14 05:21:22.620770] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:15.571 passed 00:07:15.571 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:15.571 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:15.571 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:15.571 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-14 05:21:22.620953] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:15.571 passed 00:07:15.571 Test: verify copy: DIF generated, GUARD check ...passed 00:07:15.571 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:15.571 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:15.571 Test: verify copy: DIF not generated, GUARD check ...[2024-07-14 05:21:22.621115] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:15.571 passed 00:07:15.571 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-14 05:21:22.621154] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:15.571 passed 00:07:15.571 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-14 05:21:22.621207] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:15.571 passed 00:07:15.571 Test: generate copy: DIF generated, GUARD check ...passed 00:07:15.571 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:15.571 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:15.572 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:15.572 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:15.572 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:15.572 Test: generate copy: iovecs-len validate ...[2024-07-14 05:21:22.621457] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:15.572 passed 00:07:15.572 Test: generate copy: buffer alignment validate ...passed 00:07:15.572 00:07:15.572 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.572 suites 1 1 n/a 0 0 00:07:15.572 tests 26 26 26 0 0 00:07:15.572 asserts 115 115 115 0 n/a 00:07:15.572 00:07:15.572 Elapsed time = 0.003 seconds 00:07:15.830 00:07:15.830 real 0m0.487s 00:07:15.830 user 0m0.732s 00:07:15.830 sys 0m0.176s 00:07:15.830 05:21:22 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.830 05:21:22 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:15.830 ************************************ 00:07:15.830 END TEST accel_dif_functional_tests 00:07:15.830 ************************************ 00:07:15.830 00:07:15.830 real 0m31.670s 00:07:15.830 user 0m35.000s 00:07:15.830 sys 0m4.597s 00:07:15.830 05:21:22 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.830 05:21:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.830 ************************************ 00:07:15.831 END TEST accel 00:07:15.831 ************************************ 00:07:15.831 05:21:22 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:15.831 05:21:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:15.831 05:21:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.831 05:21:22 -- common/autotest_common.sh@10 -- # set +x 00:07:15.831 ************************************ 00:07:15.831 START TEST accel_rpc 00:07:15.831 ************************************ 00:07:15.831 05:21:22 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:16.089 * Looking for test storage... 00:07:16.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:16.089 05:21:22 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:16.089 05:21:22 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3117424 00:07:16.089 05:21:22 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:16.089 05:21:22 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3117424 00:07:16.089 05:21:22 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3117424 ']' 00:07:16.089 05:21:22 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.089 05:21:22 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:16.089 05:21:22 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.089 05:21:22 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:16.089 05:21:22 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.089 [2024-07-14 05:21:22.998435] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:16.089 [2024-07-14 05:21:22.998526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3117424 ] 00:07:16.089 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.089 [2024-07-14 05:21:23.055521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.089 [2024-07-14 05:21:23.138799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.346 05:21:23 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:16.346 05:21:23 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:16.346 05:21:23 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:16.346 05:21:23 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:16.346 05:21:23 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:16.346 05:21:23 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:16.346 05:21:23 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:16.347 05:21:23 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:16.347 05:21:23 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.347 05:21:23 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.347 ************************************ 00:07:16.347 START TEST accel_assign_opcode 00:07:16.347 ************************************ 00:07:16.347 05:21:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:16.347 05:21:23 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:16.347 05:21:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.347 05:21:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:16.347 [2024-07-14 05:21:23.231521] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:16.347 05:21:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.347 05:21:23 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:16.347 05:21:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.347 05:21:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:16.347 [2024-07-14 05:21:23.239526] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:16.347 05:21:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.347 05:21:23 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:16.347 05:21:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.347 05:21:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:16.626 05:21:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.626 05:21:23 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:16.626 05:21:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.626 05:21:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:16.626 05:21:23 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:16.626 05:21:23 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:16.626 05:21:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.626 software 00:07:16.626 00:07:16.626 real 0m0.287s 00:07:16.626 user 0m0.039s 00:07:16.626 sys 0m0.006s 00:07:16.626 05:21:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.626 05:21:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:16.626 ************************************ 00:07:16.626 END TEST accel_assign_opcode 00:07:16.626 ************************************ 00:07:16.626 05:21:23 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3117424 00:07:16.626 05:21:23 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3117424 ']' 00:07:16.626 05:21:23 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3117424 00:07:16.626 05:21:23 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:16.626 05:21:23 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:16.626 05:21:23 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3117424 00:07:16.626 05:21:23 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:16.626 05:21:23 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:16.626 05:21:23 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3117424' 00:07:16.626 killing process with pid 3117424 00:07:16.626 05:21:23 accel_rpc -- common/autotest_common.sh@965 -- # kill 3117424 00:07:16.626 05:21:23 accel_rpc -- common/autotest_common.sh@970 -- # wait 3117424 00:07:16.884 00:07:16.884 real 0m1.068s 00:07:16.884 user 0m1.002s 00:07:16.884 sys 0m0.423s 00:07:16.884 05:21:23 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.884 05:21:23 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.884 ************************************ 00:07:16.884 END TEST accel_rpc 00:07:16.884 ************************************ 00:07:16.884 05:21:23 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:16.884 05:21:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:16.884 05:21:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.884 05:21:23 -- common/autotest_common.sh@10 -- # set +x 00:07:17.142 ************************************ 00:07:17.142 START TEST app_cmdline 00:07:17.142 ************************************ 00:07:17.142 05:21:24 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:17.142 * Looking for test storage... 
00:07:17.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:17.142 05:21:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:17.142 05:21:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3117628 00:07:17.142 05:21:24 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:17.142 05:21:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3117628 00:07:17.142 05:21:24 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3117628 ']' 00:07:17.142 05:21:24 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.142 05:21:24 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:17.142 05:21:24 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.142 05:21:24 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:17.142 05:21:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:17.142 [2024-07-14 05:21:24.107545] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:17.142 [2024-07-14 05:21:24.107632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3117628 ] 00:07:17.142 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.142 [2024-07-14 05:21:24.169457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.401 [2024-07-14 05:21:24.260804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.658 05:21:24 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:17.658 05:21:24 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:17.658 05:21:24 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:17.658 { 00:07:17.658 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:07:17.658 "fields": { 00:07:17.658 "major": 24, 00:07:17.658 "minor": 5, 00:07:17.658 "patch": 1, 00:07:17.658 "suffix": "-pre", 00:07:17.658 "commit": "5fa2f5086" 00:07:17.658 } 00:07:17.658 } 00:07:17.916 05:21:24 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:17.916 05:21:24 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:17.916 05:21:24 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:17.916 05:21:24 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:17.916 05:21:24 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:17.916 05:21:24 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:17.916 05:21:24 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.916 05:21:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:17.916 05:21:24 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:17.916 05:21:24 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.916 05:21:24 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:17.916 05:21:24 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:17.916 05:21:24 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:17.916 05:21:24 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:17.916 05:21:24 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:17.916 05:21:24 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.916 05:21:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.916 05:21:24 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.916 05:21:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.916 05:21:24 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.916 05:21:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.916 05:21:24 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.916 05:21:24 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:17.916 05:21:24 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:18.174 request: 00:07:18.174 { 00:07:18.174 "method": "env_dpdk_get_mem_stats", 00:07:18.174 "req_id": 1 00:07:18.174 } 00:07:18.174 Got JSON-RPC error response 00:07:18.174 response: 00:07:18.174 { 00:07:18.174 "code": -32601, 00:07:18.174 "message": "Method not found" 00:07:18.174 } 00:07:18.174 05:21:25 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:18.174 05:21:25 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.174 05:21:25 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:18.174 05:21:25 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.174 05:21:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3117628 00:07:18.174 05:21:25 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3117628 ']' 00:07:18.174 05:21:25 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3117628 00:07:18.174 05:21:25 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:18.174 05:21:25 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:18.174 05:21:25 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3117628 00:07:18.174 05:21:25 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:18.174 05:21:25 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:18.174 05:21:25 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3117628' 00:07:18.174 killing process with pid 3117628 00:07:18.174 05:21:25 app_cmdline -- common/autotest_common.sh@965 -- # kill 3117628 00:07:18.174 05:21:25 app_cmdline -- common/autotest_common.sh@970 -- # wait 3117628 00:07:18.433 00:07:18.433 real 0m1.480s 00:07:18.433 user 0m1.785s 00:07:18.433 sys 0m0.465s 00:07:18.433 05:21:25 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.433 05:21:25 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:18.433 ************************************ 00:07:18.433 END TEST app_cmdline 00:07:18.433 ************************************ 00:07:18.433 05:21:25 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:18.433 05:21:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:18.433 05:21:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.433 05:21:25 -- common/autotest_common.sh@10 -- # set +x 00:07:18.692 ************************************ 00:07:18.692 START TEST version 00:07:18.692 ************************************ 00:07:18.692 05:21:25 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:18.692 * Looking for test storage... 00:07:18.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:18.692 05:21:25 version -- app/version.sh@17 -- # get_header_version major 00:07:18.692 05:21:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:18.692 05:21:25 version -- app/version.sh@14 -- # cut -f2 00:07:18.692 05:21:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:18.692 05:21:25 version -- app/version.sh@17 -- # major=24 00:07:18.692 05:21:25 version -- app/version.sh@18 -- # get_header_version minor 00:07:18.692 05:21:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:18.692 05:21:25 version -- app/version.sh@14 -- # cut -f2 00:07:18.692 05:21:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:18.692 05:21:25 version -- app/version.sh@18 -- # minor=5 00:07:18.692 05:21:25 version -- app/version.sh@19 -- # get_header_version patch 00:07:18.692 05:21:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:18.692 05:21:25 version -- app/version.sh@14 -- # cut -f2 00:07:18.692 05:21:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:18.692 05:21:25 version -- app/version.sh@19 -- # patch=1 00:07:18.692 05:21:25 version -- app/version.sh@20 -- # get_header_version suffix 00:07:18.692 05:21:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:18.692 05:21:25 version -- app/version.sh@14 -- # cut -f2 00:07:18.692 05:21:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:18.692 05:21:25 version -- app/version.sh@20 -- # suffix=-pre 00:07:18.692 05:21:25 version -- app/version.sh@22 -- # version=24.5 00:07:18.692 05:21:25 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:18.692 05:21:25 version -- app/version.sh@25 -- # version=24.5.1 00:07:18.692 05:21:25 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:18.692 05:21:25 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:18.693 05:21:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
00:07:18.693 05:21:25 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:18.693 05:21:25 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:18.693 00:07:18.693 real 0m0.108s 00:07:18.693 user 0m0.051s 00:07:18.693 sys 0m0.077s 00:07:18.693 05:21:25 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.693 05:21:25 version -- common/autotest_common.sh@10 -- # set +x 00:07:18.693 ************************************ 00:07:18.693 END TEST version 00:07:18.693 ************************************ 00:07:18.693 05:21:25 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:18.693 05:21:25 -- spdk/autotest.sh@198 -- # uname -s 00:07:18.693 05:21:25 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:18.693 05:21:25 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:18.693 05:21:25 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:18.693 05:21:25 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:18.693 05:21:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:18.693 05:21:25 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:18.693 05:21:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:18.693 05:21:25 -- common/autotest_common.sh@10 -- # set +x 00:07:18.693 05:21:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:18.693 05:21:25 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:18.693 05:21:25 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:18.693 05:21:25 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:18.693 05:21:25 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:18.693 05:21:25 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:18.693 05:21:25 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:18.693 05:21:25 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:18.693 05:21:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.693 05:21:25 -- common/autotest_common.sh@10 -- # set +x 00:07:18.693 ************************************ 00:07:18.693 START TEST nvmf_tcp 00:07:18.693 ************************************ 00:07:18.693 05:21:25 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:18.693 * Looking for test storage... 00:07:18.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.693 05:21:25 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.693 05:21:25 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.693 05:21:25 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.693 05:21:25 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.693 05:21:25 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.693 05:21:25 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.693 05:21:25 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:18.693 05:21:25 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:18.693 05:21:25 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:18.693 05:21:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:18.693 05:21:25 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:18.693 05:21:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:18.693 05:21:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.693 05:21:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:18.952 ************************************ 00:07:18.952 START TEST nvmf_example 00:07:18.952 ************************************ 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:18.952 * Looking for test storage... 
00:07:18.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:18.952 05:21:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:20.857 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:20.857 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:20.857 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:20.857 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:20.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:20.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:07:20.857 00:07:20.857 --- 10.0.0.2 ping statistics --- 00:07:20.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.857 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:20.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:20.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:07:20.857 00:07:20.857 --- 10.0.0.1 ping statistics --- 00:07:20.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.857 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:20.857 05:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3119606 00:07:20.858 05:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:20.858 05:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:20.858 05:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3119606 00:07:20.858 05:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 3119606 ']' 00:07:20.858 05:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.858 05:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:20.858 05:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
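The nvmf_tcp_init steps traced above build the physical TCP test bed: the first ice-driven port (cvl_0_0) is moved into a dedicated network namespace to act as the target side, the second port (cvl_0_1) stays in the default namespace as the initiator side, the default NVMe/TCP port 4420 is opened in the firewall, and connectivity is verified with ping in both directions. A condensed shell sketch of that sequence, using the interface names and addresses observed in this run (the authoritative logic is nvmf_tcp_init in test/nvmf/common.sh):

    # Target NIC goes into its own namespace; the initiator NIC stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the default NVMe/TCP port and confirm reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1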
00:07:20.858 05:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:20.858 05:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.116 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.120 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:22.120 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:22.120 05:21:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:22.120 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:22.120 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.120 05:21:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:22.120 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.120 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.120 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.120 05:21:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:22.120 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.120 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.120 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.120 05:21:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:22.120 05:21:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:22.120 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.120 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.121 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.121 05:21:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:22.121 05:21:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:22.121 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.121 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.121 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.121 05:21:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.121 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.121 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.121 05:21:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.121 05:21:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:22.121 05:21:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:22.121 EAL: No free 2048 kB hugepages reported on node 1 
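With the example target (build/examples/nvmf, pid 3119606 in this run) listening on /var/tmp/spdk.sock, nvmf_example.sh provisions it over JSON-RPC and then drives it with spdk_nvme_perf. rpc_cmd is the autotest wrapper around scripts/rpc.py; the sketch below replays the same calls directly, with the arguments exactly as recorded in the trace above and repo-relative paths in place of the absolute workspace paths:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, options as used in this run
    $rpc bdev_malloc_create 64 512                    # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 10 s of 4 KiB random I/O at queue depth 64, 30% reads, run from the initiator (default) namespace.
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'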
00:07:32.090 Initializing NVMe Controllers 00:07:32.090 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:32.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:32.090 Initialization complete. Launching workers. 00:07:32.090 ======================================================== 00:07:32.090 Latency(us) 00:07:32.090 Device Information : IOPS MiB/s Average min max 00:07:32.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15148.65 59.17 4224.38 828.03 16345.35 00:07:32.090 ======================================================== 00:07:32.090 Total : 15148.65 59.17 4224.38 828.03 16345.35 00:07:32.090 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:32.348 rmmod nvme_tcp 00:07:32.348 rmmod nvme_fabrics 00:07:32.348 rmmod nvme_keyring 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3119606 ']' 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3119606 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 3119606 ']' 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 3119606 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3119606 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3119606' 00:07:32.348 killing process with pid 3119606 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 3119606 00:07:32.348 05:21:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 3119606 00:07:32.608 nvmf threads initialize successfully 00:07:32.608 bdev subsystem init successfully 00:07:32.608 created a nvmf target service 00:07:32.608 create targets's poll groups done 00:07:32.608 all subsystems of target started 00:07:32.608 nvmf target is running 00:07:32.608 all subsystems of target stopped 00:07:32.608 destroy targets's poll groups done 00:07:32.608 destroyed the nvmf target service 00:07:32.608 bdev subsystem finish successfully 00:07:32.608 nvmf threads destroy successfully 00:07:32.608 05:21:39 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:32.608 05:21:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:32.608 05:21:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:32.608 05:21:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:32.608 05:21:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:32.608 05:21:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.608 05:21:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.608 05:21:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.518 05:21:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:34.518 05:21:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:34.518 05:21:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.518 05:21:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:34.518 00:07:34.518 real 0m15.778s 00:07:34.519 user 0m45.091s 00:07:34.519 sys 0m3.130s 00:07:34.519 05:21:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.519 05:21:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:34.519 ************************************ 00:07:34.519 END TEST nvmf_example 00:07:34.519 ************************************ 00:07:34.519 05:21:41 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:34.519 05:21:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:34.519 05:21:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.519 05:21:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:34.780 ************************************ 00:07:34.780 START TEST nvmf_filesystem 00:07:34.780 ************************************ 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:34.780 * Looking for test storage... 
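nvmftestfini then unwinds the example test in reverse order of setup: the nvme-tcp and nvme-fabrics modules are unloaded (taking nvme_keyring with them, per the rmmod lines above), the target process is killed, the target-side namespace is removed and the initiator address is flushed, after which the per-test timing summary is printed and the next sub-test (nvmf_filesystem) begins. A minimal sketch of that cleanup under the same names as this run; the real helpers live in test/nvmf/common.sh and common/autotest_common.sh, and the namespace removal below is an assumption about what _remove_spdk_ns does, since its output is redirected in the trace:

    modprobe -v -r nvme-tcp           # also removes nvme_fabrics and nvme_keyring, as logged above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                   # killprocess: stop the example target (pid 3119606 here)
    ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns for this namespace
    ip -4 addr flush cvl_0_1          # return the initiator NIC to an unconfigured state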
00:07:34.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:34.780 05:21:41 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:34.780 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:34.781 05:21:41 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:34.781 #define SPDK_CONFIG_H 00:07:34.781 #define SPDK_CONFIG_APPS 1 00:07:34.781 #define SPDK_CONFIG_ARCH native 00:07:34.781 #undef SPDK_CONFIG_ASAN 00:07:34.781 #undef SPDK_CONFIG_AVAHI 00:07:34.781 #undef SPDK_CONFIG_CET 00:07:34.781 #define SPDK_CONFIG_COVERAGE 1 00:07:34.781 #define SPDK_CONFIG_CROSS_PREFIX 00:07:34.781 #undef SPDK_CONFIG_CRYPTO 00:07:34.781 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:34.781 #undef SPDK_CONFIG_CUSTOMOCF 00:07:34.781 #undef SPDK_CONFIG_DAOS 00:07:34.781 #define SPDK_CONFIG_DAOS_DIR 00:07:34.781 #define SPDK_CONFIG_DEBUG 1 00:07:34.781 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:34.781 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:34.781 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:34.781 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:34.781 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:34.781 #undef SPDK_CONFIG_DPDK_UADK 00:07:34.781 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:34.781 #define SPDK_CONFIG_EXAMPLES 1 00:07:34.781 #undef SPDK_CONFIG_FC 00:07:34.781 #define SPDK_CONFIG_FC_PATH 00:07:34.781 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:34.781 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:34.781 #undef SPDK_CONFIG_FUSE 00:07:34.781 #undef SPDK_CONFIG_FUZZER 00:07:34.781 #define SPDK_CONFIG_FUZZER_LIB 00:07:34.781 #undef SPDK_CONFIG_GOLANG 00:07:34.781 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:34.781 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:34.781 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:34.781 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:34.781 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:34.781 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:34.781 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:34.781 #define SPDK_CONFIG_IDXD 1 00:07:34.781 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:34.781 #undef SPDK_CONFIG_IPSEC_MB 00:07:34.781 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:34.781 #define SPDK_CONFIG_ISAL 1 00:07:34.781 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:34.781 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:34.781 #define SPDK_CONFIG_LIBDIR 00:07:34.781 #undef SPDK_CONFIG_LTO 00:07:34.781 #define SPDK_CONFIG_MAX_LCORES 
00:07:34.781 #define SPDK_CONFIG_NVME_CUSE 1 00:07:34.781 #undef SPDK_CONFIG_OCF 00:07:34.781 #define SPDK_CONFIG_OCF_PATH 00:07:34.781 #define SPDK_CONFIG_OPENSSL_PATH 00:07:34.781 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:34.781 #define SPDK_CONFIG_PGO_DIR 00:07:34.781 #undef SPDK_CONFIG_PGO_USE 00:07:34.781 #define SPDK_CONFIG_PREFIX /usr/local 00:07:34.781 #undef SPDK_CONFIG_RAID5F 00:07:34.781 #undef SPDK_CONFIG_RBD 00:07:34.781 #define SPDK_CONFIG_RDMA 1 00:07:34.781 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:34.781 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:34.781 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:34.781 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:34.781 #define SPDK_CONFIG_SHARED 1 00:07:34.781 #undef SPDK_CONFIG_SMA 00:07:34.781 #define SPDK_CONFIG_TESTS 1 00:07:34.781 #undef SPDK_CONFIG_TSAN 00:07:34.781 #define SPDK_CONFIG_UBLK 1 00:07:34.781 #define SPDK_CONFIG_UBSAN 1 00:07:34.781 #undef SPDK_CONFIG_UNIT_TESTS 00:07:34.781 #undef SPDK_CONFIG_URING 00:07:34.781 #define SPDK_CONFIG_URING_PATH 00:07:34.781 #undef SPDK_CONFIG_URING_ZNS 00:07:34.781 #undef SPDK_CONFIG_USDT 00:07:34.781 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:34.781 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:34.781 #define SPDK_CONFIG_VFIO_USER 1 00:07:34.781 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:34.781 #define SPDK_CONFIG_VHOST 1 00:07:34.781 #define SPDK_CONFIG_VIRTIO 1 00:07:34.781 #undef SPDK_CONFIG_VTUNE 00:07:34.781 #define SPDK_CONFIG_VTUNE_DIR 00:07:34.781 #define SPDK_CONFIG_WERROR 1 00:07:34.781 #define SPDK_CONFIG_WPDK_DIR 00:07:34.781 #undef SPDK_CONFIG_XNVME 00:07:34.781 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.781 05:21:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:34.782 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v23.11 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:34.783 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 3121355 ]] 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 3121355 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.AULojP 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.AULojP/tests/target /tmp/spdk.AULojP 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=953643008 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4330786816 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=52909158400 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994708992 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9085550592 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30941716480 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997352448 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=55635968 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12390182912 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398944256 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8761344 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:34.784 05:21:41 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30996148224 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997356544 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1208320 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199463936 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199468032 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:34.784 * Looking for test storage... 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=52909158400 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=11300143104 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:34.784 
05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:34.784 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.785 05:21:41 
nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
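At this point nvmf/common.sh is being sourced: the test constants (ports 4420-4422, host NQN/ID, serial) are exported and the nvmf_tgt argument array starts to be assembled. A hedged bash sketch of that pattern, using only values visible in this log (the contents of NVMF_APP before common.sh@29 are not shown here, so the starting element is an assumption):

# Sketch only, not the harness itself: the argument array is grown step by
# step and later prefixed with "ip netns exec <ns>" so nvmf_tgt runs inside
# the test namespace. Paths, namespace name and masks are the ones in this log.
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_APP_SHM_ID=0

NVMF_APP=("$SPDK_BIN_DIR/nvmf_tgt")                        # assumed starting element
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                # shm id and trace group mask (common.sh@29)
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")     # wrap the target in the namespace

"${NVMF_APP[@]}" -m 0xF &                                  # 4-core mask, matching nvmfappstart -m 0xF below
nvmfpid=$!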
00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:34.785 05:21:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:36.688 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:36.688 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.688 05:21:43 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:36.688 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:36.688 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:36.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:36.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:07:36.688 00:07:36.688 --- 10.0.0.2 ping statistics --- 00:07:36.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.688 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:36.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:36.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:07:36.688 00:07:36.688 --- 10.0.0.1 ping statistics --- 00:07:36.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.688 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.688 05:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.947 ************************************ 00:07:36.947 START TEST nvmf_filesystem_no_in_capsule 00:07:36.947 ************************************ 00:07:36.947 05:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:36.947 05:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:36.947 05:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:36.947 05:21:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:36.947 05:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:36.947 05:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.947 05:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3122907 00:07:36.947 05:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:36.947 05:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3122907 00:07:36.947 05:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3122907 ']' 00:07:36.947 05:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.947 05:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:36.947 05:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.947 05:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:36.947 05:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.947 [2024-07-14 05:21:43.855843] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:36.947 [2024-07-14 05:21:43.855958] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.947 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.947 [2024-07-14 05:21:43.923836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.947 [2024-07-14 05:21:44.015308] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.947 [2024-07-14 05:21:44.015358] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.947 [2024-07-14 05:21:44.015387] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.947 [2024-07-14 05:21:44.015399] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.947 [2024-07-14 05:21:44.015409] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
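Once the target process is up, the records that follow configure it over JSON-RPC via the harness's rpc_cmd wrapper. For reference, a minimal equivalent sequence using SPDK's scripts/rpc.py; every flag is copied verbatim from the rpc_cmd calls in this log, while the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions:

# Hedged sketch of the target configuration performed below.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed location
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0              # TCP transport, in-capsule data size 0
$rpc bdev_malloc_create 512 512 -b Malloc1                     # 512 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME   # allow any host, fixed serial
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1                      # expose the bdev as a namespace
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

The initiator side then reaches this listener with "nvme connect ... -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420", as the later records show.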
00:07:36.947 [2024-07-14 05:21:44.015493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.947 [2024-07-14 05:21:44.015538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.947 [2024-07-14 05:21:44.015626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.947 [2024-07-14 05:21:44.015628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.205 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:37.205 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:37.205 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:37.205 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.205 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.205 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.205 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:37.205 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:37.205 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.205 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.205 [2024-07-14 05:21:44.170691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.205 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.205 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:37.205 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.205 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.463 Malloc1 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.463 [2024-07-14 05:21:44.356806] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:37.463 { 00:07:37.463 "name": "Malloc1", 00:07:37.463 "aliases": [ 00:07:37.463 "3ca43ae1-cba4-4fce-a1b1-9b76ce87d793" 00:07:37.463 ], 00:07:37.463 "product_name": "Malloc disk", 00:07:37.463 "block_size": 512, 00:07:37.463 "num_blocks": 1048576, 00:07:37.463 "uuid": "3ca43ae1-cba4-4fce-a1b1-9b76ce87d793", 00:07:37.463 "assigned_rate_limits": { 00:07:37.463 "rw_ios_per_sec": 0, 00:07:37.463 "rw_mbytes_per_sec": 0, 00:07:37.463 "r_mbytes_per_sec": 0, 00:07:37.463 "w_mbytes_per_sec": 0 00:07:37.463 }, 00:07:37.463 "claimed": true, 00:07:37.463 "claim_type": "exclusive_write", 00:07:37.463 "zoned": false, 00:07:37.463 "supported_io_types": { 00:07:37.463 "read": true, 00:07:37.463 "write": true, 00:07:37.463 "unmap": true, 00:07:37.463 "write_zeroes": true, 00:07:37.463 "flush": true, 00:07:37.463 "reset": true, 00:07:37.463 "compare": false, 00:07:37.463 "compare_and_write": false, 00:07:37.463 "abort": true, 00:07:37.463 "nvme_admin": false, 00:07:37.463 "nvme_io": false 00:07:37.463 }, 00:07:37.463 "memory_domains": [ 00:07:37.463 { 00:07:37.463 "dma_device_id": "system", 00:07:37.463 "dma_device_type": 1 00:07:37.463 }, 00:07:37.463 { 00:07:37.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.463 "dma_device_type": 2 00:07:37.463 } 00:07:37.463 ], 00:07:37.463 "driver_specific": {} 00:07:37.463 } 00:07:37.463 ]' 00:07:37.463 
05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:37.463 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:37.464 05:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:38.397 05:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:38.397 05:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:38.397 05:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:38.397 05:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:38.397 05:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:40.294 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:40.294 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:40.294 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:40.294 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:40.294 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:40.294 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:40.295 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:40.295 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:40.295 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:40.295 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:40.295 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:40.295 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:40.295 05:21:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:40.295 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:40.295 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:40.295 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:40.295 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:40.295 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:40.860 05:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:41.793 05:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:41.793 05:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:41.793 05:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:41.793 05:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.793 05:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.793 ************************************ 00:07:41.793 START TEST filesystem_ext4 00:07:41.793 ************************************ 00:07:41.793 05:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:41.793 05:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:41.793 05:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.793 05:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:41.793 05:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:41.794 05:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:41.794 05:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:41.794 05:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:41.794 05:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:41.794 05:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:41.794 05:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:41.794 mke2fs 1.46.5 (30-Dec-2021) 00:07:41.794 Discarding device blocks: 0/522240 done 00:07:41.794 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:41.794 
Filesystem UUID: 6e77f394-23fa-4599-8aff-7aa93672d369 00:07:41.794 Superblock backups stored on blocks: 00:07:41.794 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:41.794 00:07:41.794 Allocating group tables: 0/64 done 00:07:41.794 Writing inode tables: 0/64 done 00:07:42.051 Creating journal (8192 blocks): done 00:07:43.243 Writing superblocks and filesystem accounting information: 0/64 done 00:07:43.243 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3122907 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:43.243 00:07:43.243 real 0m1.530s 00:07:43.243 user 0m0.024s 00:07:43.243 sys 0m0.047s 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:43.243 ************************************ 00:07:43.243 END TEST filesystem_ext4 00:07:43.243 ************************************ 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.243 ************************************ 00:07:43.243 START TEST filesystem_btrfs 00:07:43.243 ************************************ 00:07:43.243 05:21:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:43.243 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:43.816 btrfs-progs v6.6.2 00:07:43.816 See https://btrfs.readthedocs.io for more information. 00:07:43.816 00:07:43.816 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:43.817 NOTE: several default settings have changed in version 5.15, please make sure 00:07:43.817 this does not affect your deployments: 00:07:43.817 - DUP for metadata (-m dup) 00:07:43.817 - enabled no-holes (-O no-holes) 00:07:43.817 - enabled free-space-tree (-R free-space-tree) 00:07:43.817 00:07:43.817 Label: (null) 00:07:43.817 UUID: 27837bf2-2910-4344-84ab-ba5a38937674 00:07:43.817 Node size: 16384 00:07:43.817 Sector size: 4096 00:07:43.817 Filesystem size: 510.00MiB 00:07:43.817 Block group profiles: 00:07:43.817 Data: single 8.00MiB 00:07:43.817 Metadata: DUP 32.00MiB 00:07:43.817 System: DUP 8.00MiB 00:07:43.817 SSD detected: yes 00:07:43.817 Zoned device: no 00:07:43.817 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:43.817 Runtime features: free-space-tree 00:07:43.817 Checksum: crc32c 00:07:43.817 Number of devices: 1 00:07:43.817 Devices: 00:07:43.817 ID SIZE PATH 00:07:43.817 1 510.00MiB /dev/nvme0n1p1 00:07:43.817 00:07:43.817 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:43.817 05:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3122907 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:44.791 00:07:44.791 real 0m1.313s 00:07:44.791 user 0m0.015s 00:07:44.791 sys 0m0.110s 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:44.791 ************************************ 00:07:44.791 END TEST filesystem_btrfs 00:07:44.791 ************************************ 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:44.791 05:21:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.791 ************************************ 00:07:44.791 START TEST filesystem_xfs 00:07:44.791 ************************************ 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:44.791 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:44.792 05:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:44.792 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:44.792 = sectsz=512 attr=2, projid32bit=1 00:07:44.792 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:44.792 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:44.792 data = bsize=4096 blocks=130560, imaxpct=25 00:07:44.792 = sunit=0 swidth=0 blks 00:07:44.792 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:44.792 log =internal log bsize=4096 blocks=16384, version=2 00:07:44.792 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:44.792 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:45.724 Discarding blocks...Done. 
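The ext4, btrfs and xfs passes above all run the same create-and-verify sequence from target/filesystem.sh; a condensed sketch of that flow, assuming the exported namespace shows up on the host as /dev/nvme0n1 and using the same mount point as the scripts, is:

  # partition the namespace, build the filesystem, and do a small write/remove smoke test
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1
  mkfs.xfs -f /dev/nvme0n1p1            # mkfs.ext4 -F or mkfs.btrfs -f for the other passes
  mkdir -p /mnt/device
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition must still be visible afterwards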
00:07:45.724 05:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:45.724 05:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:48.264 05:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:48.264 05:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:48.264 05:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:48.264 05:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:48.264 05:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:48.264 05:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:48.264 05:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3122907 00:07:48.264 05:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:48.264 05:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:48.264 05:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:48.264 05:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:48.264 00:07:48.264 real 0m3.298s 00:07:48.264 user 0m0.018s 00:07:48.264 sys 0m0.053s 00:07:48.264 05:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.264 05:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:48.264 ************************************ 00:07:48.264 END TEST filesystem_xfs 00:07:48.264 ************************************ 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:48.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:48.264 
05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3122907 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3122907 ']' 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3122907 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:48.264 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:48.265 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3122907 00:07:48.265 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:48.265 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:48.265 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3122907' 00:07:48.265 killing process with pid 3122907 00:07:48.265 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 3122907 00:07:48.265 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 3122907 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:48.831 00:07:48.831 real 0m11.855s 00:07:48.831 user 0m45.504s 00:07:48.831 sys 0m1.731s 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.831 ************************************ 00:07:48.831 END TEST nvmf_filesystem_no_in_capsule 00:07:48.831 ************************************ 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.831 
************************************ 00:07:48.831 START TEST nvmf_filesystem_in_capsule 00:07:48.831 ************************************ 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3124547 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3124547 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3124547 ']' 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:48.831 05:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.831 [2024-07-14 05:21:55.758437] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:48.831 [2024-07-14 05:21:55.758530] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.831 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.831 [2024-07-14 05:21:55.827551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.831 [2024-07-14 05:21:55.915553] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.831 [2024-07-14 05:21:55.915612] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.831 [2024-07-14 05:21:55.915640] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.831 [2024-07-14 05:21:55.915652] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.831 [2024-07-14 05:21:55.915662] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
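The in-capsule run repeats the target bring-up, this time with a 4096-byte in-capsule data size. A minimal sketch of the sequence traced here and in the lines that follow (start nvmf_tgt in the test namespace, create the TCP transport, export a malloc bdev, connect from the host), with paths abbreviated and rpc.py standing in for the scripts' rpc_cmd wrapper and hostnqn/hostid flags omitted, is:

  # start the target inside the test namespace (shm id 0, all trace groups, 4-core mask)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # provision over the RPC socket: TCP transport with 4096-byte in-capsule data,
  # a 512 MiB malloc bdev, and a subsystem listening on 10.0.0.2:4420
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
  rpc.py bdev_malloc_create 512 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side: connect and wait until the namespace shows up under its serial number
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done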
00:07:48.831 [2024-07-14 05:21:55.915742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.831 [2024-07-14 05:21:55.915767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.831 [2024-07-14 05:21:55.915827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.831 [2024-07-14 05:21:55.915829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.089 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:49.089 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:49.089 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:49.089 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:49.089 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.089 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.089 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:49.089 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:49.089 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.090 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.090 [2024-07-14 05:21:56.078668] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.090 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.090 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:49.090 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.090 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.348 Malloc1 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.348 05:21:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.348 [2024-07-14 05:21:56.252163] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:49.348 { 00:07:49.348 "name": "Malloc1", 00:07:49.348 "aliases": [ 00:07:49.348 "3a9d163f-352f-4566-bdb6-9318b9b1ff4f" 00:07:49.348 ], 00:07:49.348 "product_name": "Malloc disk", 00:07:49.348 "block_size": 512, 00:07:49.348 "num_blocks": 1048576, 00:07:49.348 "uuid": "3a9d163f-352f-4566-bdb6-9318b9b1ff4f", 00:07:49.348 "assigned_rate_limits": { 00:07:49.348 "rw_ios_per_sec": 0, 00:07:49.348 "rw_mbytes_per_sec": 0, 00:07:49.348 "r_mbytes_per_sec": 0, 00:07:49.348 "w_mbytes_per_sec": 0 00:07:49.348 }, 00:07:49.348 "claimed": true, 00:07:49.348 "claim_type": "exclusive_write", 00:07:49.348 "zoned": false, 00:07:49.348 "supported_io_types": { 00:07:49.348 "read": true, 00:07:49.348 "write": true, 00:07:49.348 "unmap": true, 00:07:49.348 "write_zeroes": true, 00:07:49.348 "flush": true, 00:07:49.348 "reset": true, 00:07:49.348 "compare": false, 00:07:49.348 "compare_and_write": false, 00:07:49.348 "abort": true, 00:07:49.348 "nvme_admin": false, 00:07:49.348 "nvme_io": false 00:07:49.348 }, 00:07:49.348 "memory_domains": [ 00:07:49.348 { 00:07:49.348 "dma_device_id": "system", 00:07:49.348 "dma_device_type": 1 00:07:49.348 }, 00:07:49.348 { 00:07:49.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.348 "dma_device_type": 2 00:07:49.348 } 00:07:49.348 ], 00:07:49.348 "driver_specific": {} 00:07:49.348 } 00:07:49.348 ]' 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:49.348 05:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:50.280 05:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:50.280 05:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:50.280 05:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:50.280 05:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:50.280 05:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:52.176 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:52.740 05:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:53.676 05:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:53.676 05:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:53.676 05:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:53.676 05:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.676 05:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.676 ************************************ 00:07:53.676 START TEST filesystem_in_capsule_ext4 00:07:53.676 ************************************ 00:07:53.676 05:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:53.676 05:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:53.676 05:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:53.676 05:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:53.676 05:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:53.676 05:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:53.676 05:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:53.676 05:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:53.676 05:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:53.676 05:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:53.676 05:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:53.676 mke2fs 1.46.5 (30-Dec-2021) 00:07:53.676 Discarding device blocks: 0/522240 done 00:07:53.932 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:53.932 Filesystem UUID: 0ccc5bbe-99a7-4590-9ff5-86a3145016e0 00:07:53.932 Superblock backups stored on blocks: 00:07:53.932 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:53.932 00:07:53.932 Allocating group tables: 0/64 done 00:07:53.932 Writing inode tables: 0/64 done 00:07:56.450 Creating journal (8192 blocks): done 00:07:56.450 Writing superblocks and filesystem accounting information: 0/64 done 00:07:56.450 00:07:56.450 05:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:56.450 05:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3124547 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:57.382 00:07:57.382 real 0m3.608s 00:07:57.382 user 0m0.018s 00:07:57.382 sys 0m0.065s 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:57.382 ************************************ 00:07:57.382 END TEST filesystem_in_capsule_ext4 00:07:57.382 ************************************ 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.382 ************************************ 00:07:57.382 START TEST filesystem_in_capsule_btrfs 00:07:57.382 ************************************ 00:07:57.382 05:22:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:57.382 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:57.640 btrfs-progs v6.6.2 00:07:57.640 See https://btrfs.readthedocs.io for more information. 00:07:57.640 00:07:57.640 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:57.640 NOTE: several default settings have changed in version 5.15, please make sure 00:07:57.640 this does not affect your deployments: 00:07:57.640 - DUP for metadata (-m dup) 00:07:57.640 - enabled no-holes (-O no-holes) 00:07:57.640 - enabled free-space-tree (-R free-space-tree) 00:07:57.640 00:07:57.640 Label: (null) 00:07:57.640 UUID: 1a000c6b-7ef3-4cea-af38-9aaeed78b89f 00:07:57.640 Node size: 16384 00:07:57.640 Sector size: 4096 00:07:57.640 Filesystem size: 510.00MiB 00:07:57.640 Block group profiles: 00:07:57.640 Data: single 8.00MiB 00:07:57.640 Metadata: DUP 32.00MiB 00:07:57.640 System: DUP 8.00MiB 00:07:57.640 SSD detected: yes 00:07:57.640 Zoned device: no 00:07:57.640 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:57.640 Runtime features: free-space-tree 00:07:57.640 Checksum: crc32c 00:07:57.640 Number of devices: 1 00:07:57.640 Devices: 00:07:57.640 ID SIZE PATH 00:07:57.640 1 510.00MiB /dev/nvme0n1p1 00:07:57.640 00:07:57.640 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:57.640 05:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:58.205 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:58.205 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:58.205 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3124547 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:58.463 00:07:58.463 real 0m1.068s 00:07:58.463 user 0m0.023s 00:07:58.463 sys 0m0.115s 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:58.463 ************************************ 00:07:58.463 END TEST filesystem_in_capsule_btrfs 00:07:58.463 ************************************ 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.463 ************************************ 00:07:58.463 START TEST filesystem_in_capsule_xfs 00:07:58.463 ************************************ 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:58.463 05:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:58.463 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:58.463 = sectsz=512 attr=2, projid32bit=1 00:07:58.464 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:58.464 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:58.464 data = bsize=4096 blocks=130560, imaxpct=25 00:07:58.464 = sunit=0 swidth=0 blks 00:07:58.464 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:58.464 log =internal log bsize=4096 blocks=16384, version=2 00:07:58.464 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:58.464 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:59.834 Discarding blocks...Done. 
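After each mkfs the scripts verify that the target process survived the I/O and that the namespace and its partition are still mapped on the host; the checks traced below reduce to roughly the following, assuming the target pid is in $nvmfpid:

  # resolve the block device from the controller serial, then verify target and partition
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  kill -0 "$nvmfpid"                                 # target process must still be running
  lsblk -l -o NAME | grep -q -w "$nvme_name"         # namespace still present
  lsblk -l -o NAME | grep -q -w "${nvme_name}p1"     # partition still present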
00:07:59.834 05:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:59.834 05:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:01.769 05:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:02.027 05:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:02.027 05:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:02.027 05:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:02.027 05:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:02.027 05:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:02.027 05:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3124547 00:08:02.027 05:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:02.027 05:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:02.027 05:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:02.027 05:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:02.027 00:08:02.027 real 0m3.531s 00:08:02.027 user 0m0.015s 00:08:02.027 sys 0m0.063s 00:08:02.027 05:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.027 05:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:02.027 ************************************ 00:08:02.027 END TEST filesystem_in_capsule_xfs 00:08:02.027 ************************************ 00:08:02.027 05:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:02.027 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:02.027 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:02.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.027 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:02.027 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:02.027 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:02.027 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.285 05:22:09 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3124547 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3124547 ']' 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3124547 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3124547 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3124547' 00:08:02.285 killing process with pid 3124547 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 3124547 00:08:02.285 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 3124547 00:08:02.543 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:02.543 00:08:02.543 real 0m13.907s 00:08:02.543 user 0m53.489s 00:08:02.543 sys 0m1.984s 00:08:02.543 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.543 05:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.543 ************************************ 00:08:02.543 END TEST nvmf_filesystem_in_capsule 00:08:02.543 ************************************ 00:08:02.543 05:22:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:02.543 05:22:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:02.543 05:22:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:02.543 05:22:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:02.543 05:22:09 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:08:02.543 05:22:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:02.543 05:22:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:02.825 rmmod nvme_tcp 00:08:02.825 rmmod nvme_fabrics 00:08:02.825 rmmod nvme_keyring 00:08:02.825 05:22:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:02.825 05:22:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:02.825 05:22:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:02.825 05:22:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:02.825 05:22:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:02.825 05:22:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:02.825 05:22:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:02.825 05:22:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:02.825 05:22:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:02.825 05:22:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.826 05:22:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.826 05:22:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.730 05:22:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:04.730 00:08:04.730 real 0m30.119s 00:08:04.730 user 1m39.865s 00:08:04.730 sys 0m5.196s 00:08:04.730 05:22:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:04.730 05:22:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.730 ************************************ 00:08:04.730 END TEST nvmf_filesystem 00:08:04.730 ************************************ 00:08:04.730 05:22:11 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:04.730 05:22:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:04.730 05:22:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.730 05:22:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:04.730 ************************************ 00:08:04.730 START TEST nvmf_target_discovery 00:08:04.730 ************************************ 00:08:04.730 05:22:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:04.988 * Looking for test storage... 
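The nvmftestfini step traced just above tears the host stack back down: the NVMe/TCP modules are unloaded and the test interface address is flushed. A rough equivalent of those commands:

  # unload host-side NVMe over Fabrics modules and clear the test interface
  sync
  modprobe -v -r nvme-tcp        # the trace shows nvme_fabrics and nvme_keyring dropped with it
  modprobe -v -r nvme-fabrics
  ip -4 addr flush cvl_0_1       # interface name taken from the trace above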
00:08:04.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.988 05:22:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:04.989 05:22:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.889 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.889 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:06.889 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:06.889 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.890 05:22:13 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:06.890 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:06.890 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:06.890 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:06.890 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:06.890 05:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.149 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.149 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.149 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:07.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:08:07.149 00:08:07.149 --- 10.0.0.2 ping statistics --- 00:08:07.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.149 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:08:07.149 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:07.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:08:07.149 00:08:07.149 --- 10.0.0.1 ping statistics --- 00:08:07.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.149 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:08:07.149 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.149 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:07.149 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:07.149 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.149 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:07.149 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:07.149 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.149 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:07.149 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:07.150 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:07.150 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:07.150 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:07.150 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.150 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3128298 00:08:07.150 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:07.150 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3128298 00:08:07.150 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 3128298 ']' 00:08:07.150 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.150 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:07.150 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:07.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.150 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:07.150 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.150 [2024-07-14 05:22:14.122455] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:07.150 [2024-07-14 05:22:14.122537] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.150 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.150 [2024-07-14 05:22:14.197240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.408 [2024-07-14 05:22:14.289613] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.408 [2024-07-14 05:22:14.289667] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.408 [2024-07-14 05:22:14.289693] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.408 [2024-07-14 05:22:14.289713] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.408 [2024-07-14 05:22:14.289725] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.408 [2024-07-14 05:22:14.289803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.408 [2024-07-14 05:22:14.289892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.408 [2024-07-14 05:22:14.289977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.408 [2024-07-14 05:22:14.289980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.408 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:07.408 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:07.408 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.408 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:07.408 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.409 [2024-07-14 05:22:14.449738] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:07.409 05:22:14 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.409 Null1 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.409 [2024-07-14 05:22:14.490092] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.409 Null2 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.409 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:07.667 05:22:14 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.667 Null3 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.667 Null4 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.667 05:22:14 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.667 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.668 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.668 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.668 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.668 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.668 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:07.668 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.668 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.668 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.668 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:07.668 00:08:07.668 Discovery Log Number of Records 6, Generation counter 6 00:08:07.668 =====Discovery Log Entry 0====== 00:08:07.668 trtype: tcp 00:08:07.668 adrfam: ipv4 00:08:07.668 subtype: current discovery subsystem 00:08:07.668 treq: not required 00:08:07.668 portid: 0 00:08:07.668 trsvcid: 4420 00:08:07.668 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:07.668 traddr: 10.0.0.2 00:08:07.668 eflags: explicit discovery connections, duplicate discovery information 00:08:07.668 sectype: none 00:08:07.668 =====Discovery Log Entry 1====== 00:08:07.668 trtype: tcp 00:08:07.668 adrfam: ipv4 00:08:07.668 subtype: nvme subsystem 00:08:07.668 treq: not required 00:08:07.668 portid: 0 00:08:07.668 trsvcid: 4420 00:08:07.668 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:07.668 traddr: 10.0.0.2 00:08:07.668 eflags: none 00:08:07.668 sectype: none 00:08:07.668 =====Discovery Log Entry 2====== 00:08:07.668 trtype: tcp 00:08:07.668 adrfam: ipv4 00:08:07.668 subtype: nvme subsystem 00:08:07.668 treq: not required 00:08:07.668 portid: 0 00:08:07.668 trsvcid: 4420 00:08:07.668 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:07.668 traddr: 10.0.0.2 00:08:07.668 eflags: none 00:08:07.668 sectype: none 00:08:07.668 =====Discovery Log Entry 3====== 00:08:07.668 trtype: tcp 00:08:07.668 adrfam: ipv4 00:08:07.668 subtype: nvme subsystem 00:08:07.668 treq: not required 00:08:07.668 portid: 0 00:08:07.668 trsvcid: 4420 00:08:07.668 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:07.668 traddr: 10.0.0.2 00:08:07.668 eflags: none 00:08:07.668 sectype: none 00:08:07.668 =====Discovery Log Entry 4====== 00:08:07.668 trtype: tcp 00:08:07.668 adrfam: ipv4 00:08:07.668 subtype: nvme subsystem 00:08:07.668 treq: not required 
00:08:07.668 portid: 0 00:08:07.668 trsvcid: 4420 00:08:07.668 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:07.668 traddr: 10.0.0.2 00:08:07.668 eflags: none 00:08:07.668 sectype: none 00:08:07.668 =====Discovery Log Entry 5====== 00:08:07.668 trtype: tcp 00:08:07.668 adrfam: ipv4 00:08:07.668 subtype: discovery subsystem referral 00:08:07.668 treq: not required 00:08:07.668 portid: 0 00:08:07.668 trsvcid: 4430 00:08:07.668 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:07.668 traddr: 10.0.0.2 00:08:07.668 eflags: none 00:08:07.668 sectype: none 00:08:07.668 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:07.668 Perform nvmf subsystem discovery via RPC 00:08:07.668 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:07.668 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.668 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.928 [ 00:08:07.928 { 00:08:07.928 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:07.928 "subtype": "Discovery", 00:08:07.928 "listen_addresses": [ 00:08:07.928 { 00:08:07.928 "trtype": "TCP", 00:08:07.928 "adrfam": "IPv4", 00:08:07.928 "traddr": "10.0.0.2", 00:08:07.928 "trsvcid": "4420" 00:08:07.928 } 00:08:07.928 ], 00:08:07.928 "allow_any_host": true, 00:08:07.928 "hosts": [] 00:08:07.928 }, 00:08:07.928 { 00:08:07.928 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.928 "subtype": "NVMe", 00:08:07.928 "listen_addresses": [ 00:08:07.928 { 00:08:07.928 "trtype": "TCP", 00:08:07.928 "adrfam": "IPv4", 00:08:07.928 "traddr": "10.0.0.2", 00:08:07.928 "trsvcid": "4420" 00:08:07.928 } 00:08:07.928 ], 00:08:07.928 "allow_any_host": true, 00:08:07.928 "hosts": [], 00:08:07.928 "serial_number": "SPDK00000000000001", 00:08:07.928 "model_number": "SPDK bdev Controller", 00:08:07.928 "max_namespaces": 32, 00:08:07.928 "min_cntlid": 1, 00:08:07.928 "max_cntlid": 65519, 00:08:07.928 "namespaces": [ 00:08:07.928 { 00:08:07.928 "nsid": 1, 00:08:07.928 "bdev_name": "Null1", 00:08:07.928 "name": "Null1", 00:08:07.928 "nguid": "43901CC4FA09442DB5852E600C20E7AC", 00:08:07.928 "uuid": "43901cc4-fa09-442d-b585-2e600c20e7ac" 00:08:07.928 } 00:08:07.928 ] 00:08:07.928 }, 00:08:07.928 { 00:08:07.928 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:07.928 "subtype": "NVMe", 00:08:07.928 "listen_addresses": [ 00:08:07.928 { 00:08:07.928 "trtype": "TCP", 00:08:07.928 "adrfam": "IPv4", 00:08:07.928 "traddr": "10.0.0.2", 00:08:07.928 "trsvcid": "4420" 00:08:07.928 } 00:08:07.928 ], 00:08:07.928 "allow_any_host": true, 00:08:07.928 "hosts": [], 00:08:07.928 "serial_number": "SPDK00000000000002", 00:08:07.928 "model_number": "SPDK bdev Controller", 00:08:07.928 "max_namespaces": 32, 00:08:07.928 "min_cntlid": 1, 00:08:07.928 "max_cntlid": 65519, 00:08:07.928 "namespaces": [ 00:08:07.928 { 00:08:07.928 "nsid": 1, 00:08:07.928 "bdev_name": "Null2", 00:08:07.928 "name": "Null2", 00:08:07.928 "nguid": "38D312357E2D4D228FB0383D5E4DC116", 00:08:07.928 "uuid": "38d31235-7e2d-4d22-8fb0-383d5e4dc116" 00:08:07.928 } 00:08:07.928 ] 00:08:07.928 }, 00:08:07.928 { 00:08:07.928 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:07.928 "subtype": "NVMe", 00:08:07.928 "listen_addresses": [ 00:08:07.928 { 00:08:07.928 "trtype": "TCP", 00:08:07.928 "adrfam": "IPv4", 00:08:07.928 "traddr": "10.0.0.2", 00:08:07.928 "trsvcid": "4420" 00:08:07.928 } 00:08:07.928 ], 00:08:07.928 "allow_any_host": true, 
00:08:07.928 "hosts": [], 00:08:07.928 "serial_number": "SPDK00000000000003", 00:08:07.928 "model_number": "SPDK bdev Controller", 00:08:07.928 "max_namespaces": 32, 00:08:07.928 "min_cntlid": 1, 00:08:07.928 "max_cntlid": 65519, 00:08:07.928 "namespaces": [ 00:08:07.928 { 00:08:07.928 "nsid": 1, 00:08:07.928 "bdev_name": "Null3", 00:08:07.928 "name": "Null3", 00:08:07.928 "nguid": "2490A1F56BD74A6D82B28AFB8C94F9DD", 00:08:07.928 "uuid": "2490a1f5-6bd7-4a6d-82b2-8afb8c94f9dd" 00:08:07.928 } 00:08:07.928 ] 00:08:07.928 }, 00:08:07.928 { 00:08:07.928 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:07.928 "subtype": "NVMe", 00:08:07.928 "listen_addresses": [ 00:08:07.928 { 00:08:07.928 "trtype": "TCP", 00:08:07.928 "adrfam": "IPv4", 00:08:07.928 "traddr": "10.0.0.2", 00:08:07.928 "trsvcid": "4420" 00:08:07.928 } 00:08:07.928 ], 00:08:07.928 "allow_any_host": true, 00:08:07.928 "hosts": [], 00:08:07.928 "serial_number": "SPDK00000000000004", 00:08:07.928 "model_number": "SPDK bdev Controller", 00:08:07.928 "max_namespaces": 32, 00:08:07.928 "min_cntlid": 1, 00:08:07.928 "max_cntlid": 65519, 00:08:07.928 "namespaces": [ 00:08:07.928 { 00:08:07.928 "nsid": 1, 00:08:07.928 "bdev_name": "Null4", 00:08:07.928 "name": "Null4", 00:08:07.928 "nguid": "2AF92A02D90F4E168B9DE19C02E6BAF3", 00:08:07.928 "uuid": "2af92a02-d90f-4e16-8b9d-e19c02e6baf3" 00:08:07.928 } 00:08:07.928 ] 00:08:07.928 } 00:08:07.928 ] 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.928 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:07.929 rmmod nvme_tcp 00:08:07.929 rmmod nvme_fabrics 00:08:07.929 rmmod nvme_keyring 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3128298 ']' 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3128298 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 3128298 ']' 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 3128298 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3128298 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3128298' 00:08:07.929 killing process with pid 3128298 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 3128298 00:08:07.929 05:22:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 3128298 00:08:08.187 05:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.187 05:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:08.187 05:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.187 05:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.187 05:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.187 05:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.187 05:22:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.187 05:22:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.725 05:22:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:10.725 00:08:10.725 real 0m5.461s 00:08:10.725 user 0m4.515s 00:08:10.725 sys 0m1.813s 00:08:10.725 05:22:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:10.725 05:22:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.725 ************************************ 00:08:10.725 END TEST nvmf_target_discovery 00:08:10.725 ************************************ 00:08:10.725 05:22:17 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:10.725 05:22:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:10.725 05:22:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:10.725 05:22:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:10.725 ************************************ 00:08:10.725 START TEST nvmf_referrals 00:08:10.725 ************************************ 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:10.725 * Looking for test storage... 00:08:10.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
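referrals.sh starts by defining the three referral targets 127.0.0.2 through 127.0.0.4 (the referral port 4430 and the discovery/subsystem NQNs follow immediately below) and then repeats the same phy TCP bring-up as the previous test. A small sketch of the referral add/remove cycle those variables feed, reusing the nvmf_discovery_add_referral / nvmf_discovery_remove_referral RPCs already seen in the discovery test; calling rpc.py directly and discovering against 10.0.0.2:4420 are assumptions for illustration:

  rpc=./scripts/rpc.py
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_add_referral -t tcp -a $ip -s 4430       # advertise another discovery service
  done
  nvme discover -t tcp -a 10.0.0.2 -s 4420                         # each referral shows up as an extra discovery record
  $rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430  # and can be withdrawn again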
00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:10.725 05:22:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.626 05:22:19 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:12.626 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:12.626 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:12.627 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:12.627 05:22:19 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:12.627 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:12.627 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:12.627 05:22:19 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:12.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:08:12.627 00:08:12.627 --- 10.0.0.2 ping statistics --- 00:08:12.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.627 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:12.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:08:12.627 00:08:12.627 --- 10.0.0.1 ping statistics --- 00:08:12.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.627 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3130386 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3130386 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 3130386 ']' 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:12.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:12.627 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.627 [2024-07-14 05:22:19.498207] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:12.627 [2024-07-14 05:22:19.498288] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.627 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.627 [2024-07-14 05:22:19.571443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.627 [2024-07-14 05:22:19.667114] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.627 [2024-07-14 05:22:19.667192] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.627 [2024-07-14 05:22:19.667208] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.627 [2024-07-14 05:22:19.667222] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.627 [2024-07-14 05:22:19.667233] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.627 [2024-07-14 05:22:19.667293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.627 [2024-07-14 05:22:19.667349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.627 [2024-07-14 05:22:19.667396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.627 [2024-07-14 05:22:19.667398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.897 [2024-07-14 05:22:19.817681] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.897 [2024-07-14 05:22:19.829950] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
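Condensed, the fixture that nvmftestinit and nvmfappstart assemble in the trace above boils down to roughly the following; this is a minimal sketch that assumes the two ice ports have already been renamed cvl_0_0/cvl_0_1 and that SPDK_DIR is a stand-in for the checked-out tree, with rpc.py standing in for the suite's rpc_cmd wrapper rather than reproducing nvmf/common.sh exactly:

# move the target-side port into its own namespace, keep the initiator side in the root ns
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> root ns

# start the target inside the namespace, then create the transport and discovery listener
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
# (the suite waits here for /var/tmp/spdk.sock via waitforlisten)
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery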
00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 
-s 8009 -o json 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.897 05:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:13.154 05:22:20 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.154 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.411 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.411 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:13.411 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:13.411 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.411 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:13.411 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.411 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.411 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:13.411 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.411 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:13.411 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:13.412 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:13.412 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.412 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.412 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.412 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.412 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:13.412 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:13.412 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:13.412 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:13.412 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:13.412 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:13.412 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.412 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:13.668 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:13.668 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:13.668 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:13.668 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:13.668 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.668 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.925 05:22:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:13.925 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:13.925 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:13.925 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:13.925 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:13.925 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.925 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:14.182 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:14.182 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:14.182 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.182 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.182 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.182 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:14.182 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.182 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:14.182 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.182 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.182 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:14.182 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:14.182 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:14.182 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:14.182 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.182 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:14.182 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.439 rmmod nvme_tcp 00:08:14.439 rmmod nvme_fabrics 00:08:14.439 rmmod nvme_keyring 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3130386 ']' 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3130386 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 3130386 ']' 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 3130386 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3130386 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3130386' 00:08:14.439 killing process with pid 3130386 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 3130386 00:08:14.439 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 3130386 00:08:14.697 05:22:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:14.698 05:22:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:14.698 05:22:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:14.698 05:22:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.698 05:22:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.698 05:22:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.698 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.698 05:22:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.640 05:22:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:16.640 00:08:16.640 real 0m6.357s 00:08:16.640 user 0m9.130s 00:08:16.640 sys 0m2.043s 00:08:16.640 05:22:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:08:16.640 05:22:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.640 ************************************ 00:08:16.640 END TEST nvmf_referrals 00:08:16.640 ************************************ 00:08:16.640 05:22:23 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:16.640 05:22:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:16.640 05:22:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:16.640 05:22:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.640 ************************************ 00:08:16.640 START TEST nvmf_connect_disconnect 00:08:16.640 ************************************ 00:08:16.640 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:16.898 * Looking for test storage... 00:08:16.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.898 05:22:23 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.898 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:16.899 05:22:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:18.799 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:18.799 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.799 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:18.800 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:18.800 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.800 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:19.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:08:19.058 00:08:19.058 --- 10.0.0.2 ping statistics --- 00:08:19.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.058 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:08:19.058 00:08:19.058 --- 10.0.0.1 ping statistics --- 00:08:19.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.058 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3132606 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3132606 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 3132606 ']' 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:19.058 05:22:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.058 [2024-07-14 05:22:26.003492] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
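The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line corresponds to waitforlisten polling the target's RPC socket before any rpc_cmd calls are issued; a rough stand-in for that wait (SPDK_DIR again being a placeholder for the checked-out tree, and the retry loop an approximation rather than the suite's exact helper) looks like:

# poll until nvmf_tgt answers on its default RPC socket
for _ in $(seq 1 100); do
    if [ -S /var/tmp/spdk.sock ] && "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done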
00:08:19.058 [2024-07-14 05:22:26.003564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.058 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.058 [2024-07-14 05:22:26.073393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.314 [2024-07-14 05:22:26.164210] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.314 [2024-07-14 05:22:26.164272] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.314 [2024-07-14 05:22:26.164300] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.314 [2024-07-14 05:22:26.164314] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.314 [2024-07-14 05:22:26.164326] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.314 [2024-07-14 05:22:26.164382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.314 [2024-07-14 05:22:26.164451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.314 [2024-07-14 05:22:26.164482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.314 [2024-07-14 05:22:26.164488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.314 [2024-07-14 05:22:26.323723] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:19.314 05:22:26 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.314 [2024-07-14 05:22:26.376783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:19.314 05:22:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:21.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.586 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:08.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:10.919 rmmod nvme_tcp 00:12:10.919 rmmod nvme_fabrics 00:12:10.919 rmmod nvme_keyring 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3132606 ']' 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3132606 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 
3132606 ']' 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 3132606 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3132606 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3132606' 00:12:10.919 killing process with pid 3132606 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 3132606 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 3132606 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.919 05:26:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.453 05:26:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:13.453 00:12:13.453 real 3m56.310s 00:12:13.453 user 15m0.397s 00:12:13.453 sys 0m34.378s 00:12:13.453 05:26:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:13.453 05:26:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.453 ************************************ 00:12:13.453 END TEST nvmf_connect_disconnect 00:12:13.453 ************************************ 00:12:13.453 05:26:20 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:13.454 05:26:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:13.454 05:26:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:13.454 05:26:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:13.454 ************************************ 00:12:13.454 START TEST nvmf_multitarget 00:12:13.454 ************************************ 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:13.454 * Looking for test storage... 
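The hundred "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines earlier in this test come from the loop configured at connect_disconnect.sh@27-29 (num_iterations=100, NVME_CONNECT='nvme connect -i 8'). A rough sketch of one iteration, with the NQN, address and port taken from the rpc_cmd calls in the trace; any device checks the real script performs between connect and disconnect are omitted here.

  for i in $(seq 1 100); do
      # attach the initiator over NVMe/TCP with 8 I/O queues, as in the logged NVME_CONNECT
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      # ... the test verifies the controller/namespace before tearing it down ...
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # emits "NQN:... disconnected 1 controller(s)"
  done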
00:12:13.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:13.454 05:26:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:15.358 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:15.358 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:15.358 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:15.358 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:15.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:15.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:12:15.358 00:12:15.358 --- 10.0.0.2 ping statistics --- 00:12:15.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.358 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:12:15.358 00:12:15.358 --- 10.0.0.1 ping statistics --- 00:12:15.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.358 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3163739 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3163739 00:12:15.358 05:26:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 3163739 ']' 00:12:15.359 05:26:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.359 05:26:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:15.359 05:26:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.359 05:26:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:15.359 05:26:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.359 [2024-07-14 05:26:22.432353] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
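The nvmf_tcp_init sequence above splits the two E810 ports between the root namespace (initiator, 10.0.0.1 on cvl_0_1) and the private cvl_0_0_ns_spdk namespace (target, 10.0.0.2 on cvl_0_0), opens TCP/4420, and ping-checks both directions before the target is started. Condensed from the commands visible in the trace, with device and namespace names as logged:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the listener
  ping -c 1 10.0.0.2                                        # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> initiator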
00:12:15.359 [2024-07-14 05:26:22.432447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.618 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.618 [2024-07-14 05:26:22.500334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.618 [2024-07-14 05:26:22.590062] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.618 [2024-07-14 05:26:22.590122] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.618 [2024-07-14 05:26:22.590152] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.618 [2024-07-14 05:26:22.590165] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.618 [2024-07-14 05:26:22.590176] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.618 [2024-07-14 05:26:22.590302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.618 [2024-07-14 05:26:22.590369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.618 [2024-07-14 05:26:22.590434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.618 [2024-07-14 05:26:22.590436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.618 05:26:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:15.618 05:26:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:15.618 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.618 05:26:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.618 05:26:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.876 05:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.876 05:26:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:15.876 05:26:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:15.876 05:26:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:15.876 05:26:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:15.876 05:26:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:15.876 "nvmf_tgt_1" 00:12:15.876 05:26:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:16.134 "nvmf_tgt_2" 00:12:16.134 05:26:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:16.134 05:26:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:16.134 05:26:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:16.134 
05:26:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:16.391 true 00:12:16.391 05:26:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:16.391 true 00:12:16.391 05:26:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:16.391 05:26:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:16.648 rmmod nvme_tcp 00:12:16.648 rmmod nvme_fabrics 00:12:16.648 rmmod nvme_keyring 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3163739 ']' 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3163739 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 3163739 ']' 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 3163739 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3163739 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3163739' 00:12:16.648 killing process with pid 3163739 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 3163739 00:12:16.648 05:26:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 3163739 00:12:16.907 05:26:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:16.907 05:26:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:16.907 05:26:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:16.907 05:26:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.907 05:26:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.907 05:26:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.907 05:26:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.907 05:26:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.804 05:26:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:18.804 00:12:18.804 real 0m5.810s 00:12:18.804 user 0m6.412s 00:12:18.804 sys 0m1.974s 00:12:18.804 05:26:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:18.804 05:26:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:18.804 ************************************ 00:12:18.804 END TEST nvmf_multitarget 00:12:18.804 ************************************ 00:12:18.804 05:26:25 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:18.804 05:26:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:18.804 05:26:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:18.804 05:26:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:19.061 ************************************ 00:12:19.061 START TEST nvmf_rpc 00:12:19.061 ************************************ 00:12:19.061 05:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:19.061 * Looking for test storage... 00:12:19.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.061 05:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.061 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:19.061 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.061 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.061 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.061 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.061 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.061 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.061 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.061 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.061 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.061 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.061 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.062 05:26:25 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.062 
05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.062 05:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.963 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.963 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:20.963 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:20.963 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:20.963 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:20.963 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:20.963 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:20.963 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:20.963 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:20.963 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:20.963 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:20.963 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:20.963 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:20.963 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:20.963 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:20.963 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:20.964 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:20.964 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:20.964 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.964 
05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:20.964 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:20.964 05:26:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.964 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.964 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.964 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:20.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:20.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:12:20.964 00:12:20.964 --- 10.0.0.2 ping statistics --- 00:12:20.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.964 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:12:20.964 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:12:20.964 00:12:20.964 --- 10.0.0.1 ping statistics --- 00:12:20.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.964 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:12:20.964 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.964 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:20.964 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:20.964 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.964 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:20.964 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:20.964 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.964 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:20.964 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:20.964 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:20.964 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:20.964 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:20.964 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.223 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3165837 00:12:21.223 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:21.223 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3165837 00:12:21.223 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 3165837 ']' 00:12:21.223 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.223 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:21.223 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.223 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:21.223 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.223 [2024-07-14 05:26:28.119723] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
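The namespace topology that nvmf_tcp_init builds in the trace above can be reproduced by hand roughly as follows. The interface names (cvl_0_0 / cvl_0_1), the 10.0.0.x addresses, and the namespace name are the ones this particular run detected, so treat them as placeholders on other hosts.

# Minimal re-creation of the TCP loopback setup: one port moves into a network
# namespace for the SPDK target, the other stays in the default namespace for
# the initiator, and reachability is verified in both directions.
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip -4 addr flush dev "$TARGET_IF"
ip -4 addr flush dev "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic reach port 4420 and confirm both directions work.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1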
00:12:21.223 [2024-07-14 05:26:28.119808] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.223 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.223 [2024-07-14 05:26:28.189800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.223 [2024-07-14 05:26:28.281170] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.223 [2024-07-14 05:26:28.281224] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.223 [2024-07-14 05:26:28.281240] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.223 [2024-07-14 05:26:28.281253] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.223 [2024-07-14 05:26:28.281264] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.223 [2024-07-14 05:26:28.281342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.223 [2024-07-14 05:26:28.281410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.223 [2024-07-14 05:26:28.281505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.223 [2024-07-14 05:26:28.281507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:21.482 "tick_rate": 2700000000, 00:12:21.482 "poll_groups": [ 00:12:21.482 { 00:12:21.482 "name": "nvmf_tgt_poll_group_000", 00:12:21.482 "admin_qpairs": 0, 00:12:21.482 "io_qpairs": 0, 00:12:21.482 "current_admin_qpairs": 0, 00:12:21.482 "current_io_qpairs": 0, 00:12:21.482 "pending_bdev_io": 0, 00:12:21.482 "completed_nvme_io": 0, 00:12:21.482 "transports": [] 00:12:21.482 }, 00:12:21.482 { 00:12:21.482 "name": "nvmf_tgt_poll_group_001", 00:12:21.482 "admin_qpairs": 0, 00:12:21.482 "io_qpairs": 0, 00:12:21.482 "current_admin_qpairs": 0, 00:12:21.482 "current_io_qpairs": 0, 00:12:21.482 "pending_bdev_io": 0, 00:12:21.482 "completed_nvme_io": 0, 00:12:21.482 "transports": [] 00:12:21.482 }, 00:12:21.482 { 00:12:21.482 "name": "nvmf_tgt_poll_group_002", 00:12:21.482 "admin_qpairs": 0, 00:12:21.482 "io_qpairs": 0, 00:12:21.482 "current_admin_qpairs": 0, 00:12:21.482 "current_io_qpairs": 0, 00:12:21.482 "pending_bdev_io": 0, 00:12:21.482 "completed_nvme_io": 0, 00:12:21.482 "transports": [] 
00:12:21.482 }, 00:12:21.482 { 00:12:21.482 "name": "nvmf_tgt_poll_group_003", 00:12:21.482 "admin_qpairs": 0, 00:12:21.482 "io_qpairs": 0, 00:12:21.482 "current_admin_qpairs": 0, 00:12:21.482 "current_io_qpairs": 0, 00:12:21.482 "pending_bdev_io": 0, 00:12:21.482 "completed_nvme_io": 0, 00:12:21.482 "transports": [] 00:12:21.482 } 00:12:21.482 ] 00:12:21.482 }' 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.482 [2024-07-14 05:26:28.538033] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:21.482 "tick_rate": 2700000000, 00:12:21.482 "poll_groups": [ 00:12:21.482 { 00:12:21.482 "name": "nvmf_tgt_poll_group_000", 00:12:21.482 "admin_qpairs": 0, 00:12:21.482 "io_qpairs": 0, 00:12:21.482 "current_admin_qpairs": 0, 00:12:21.482 "current_io_qpairs": 0, 00:12:21.482 "pending_bdev_io": 0, 00:12:21.482 "completed_nvme_io": 0, 00:12:21.482 "transports": [ 00:12:21.482 { 00:12:21.482 "trtype": "TCP" 00:12:21.482 } 00:12:21.482 ] 00:12:21.482 }, 00:12:21.482 { 00:12:21.482 "name": "nvmf_tgt_poll_group_001", 00:12:21.482 "admin_qpairs": 0, 00:12:21.482 "io_qpairs": 0, 00:12:21.482 "current_admin_qpairs": 0, 00:12:21.482 "current_io_qpairs": 0, 00:12:21.482 "pending_bdev_io": 0, 00:12:21.482 "completed_nvme_io": 0, 00:12:21.482 "transports": [ 00:12:21.482 { 00:12:21.482 "trtype": "TCP" 00:12:21.482 } 00:12:21.482 ] 00:12:21.482 }, 00:12:21.482 { 00:12:21.482 "name": "nvmf_tgt_poll_group_002", 00:12:21.482 "admin_qpairs": 0, 00:12:21.482 "io_qpairs": 0, 00:12:21.482 "current_admin_qpairs": 0, 00:12:21.482 "current_io_qpairs": 0, 00:12:21.482 "pending_bdev_io": 0, 00:12:21.482 "completed_nvme_io": 0, 00:12:21.482 "transports": [ 00:12:21.482 { 00:12:21.482 "trtype": "TCP" 00:12:21.482 } 00:12:21.482 ] 00:12:21.482 }, 00:12:21.482 { 00:12:21.482 "name": "nvmf_tgt_poll_group_003", 00:12:21.482 "admin_qpairs": 0, 00:12:21.482 "io_qpairs": 0, 00:12:21.482 "current_admin_qpairs": 0, 00:12:21.482 "current_io_qpairs": 0, 00:12:21.482 "pending_bdev_io": 0, 00:12:21.482 "completed_nvme_io": 0, 00:12:21.482 "transports": [ 00:12:21.482 { 00:12:21.482 "trtype": "TCP" 00:12:21.482 } 00:12:21.482 ] 00:12:21.482 } 00:12:21.482 ] 
00:12:21.482 }' 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:21.482 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.741 Malloc1 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.741 [2024-07-14 05:26:28.689388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:21.741 [2024-07-14 05:26:28.711894] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:21.741 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:21.741 could not add new controller: failed to write to nvme-fabrics device 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.741 05:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.307 05:26:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:22.307 05:26:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:22.307 05:26:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.307 05:26:29 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:22.307 05:26:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x 
/usr/sbin/nvme ]] 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.833 [2024-07-14 05:26:31.461256] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:24.833 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:24.833 could not add new controller: failed to write to nvme-fabrics device 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.833 05:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.090 05:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:25.090 05:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:25.090 05:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.090 05:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:25.090 05:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:27.614 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:27.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 
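The failed and successful connects above exercise the subsystem host allow-list. A condensed sketch of that sequence, using the same rpc_cmd wrapper, nvme-cli calls, and host NQN visible in this run (substitute your own NQNs elsewhere), looks like this:

SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55   # host UUID from this run

# 1. allow_any_host disabled and host not whitelisted -> the connect is rejected
#    by nvmf_qpair_access_allowed ("Subsystem ... does not allow host ...").
rpc_cmd nvmf_subsystem_allow_any_host -d "$SUBNQN"
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" --hostnqn="$HOSTNQN" || echo "rejected as expected"

# 2. Whitelist the host explicitly -> the same connect succeeds.
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" --hostnqn="$HOSTNQN"
nvme disconnect -n "$SUBNQN"

# 3. Remove the host entry and re-enable allow_any_host instead -> also succeeds.
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
rpc_cmd nvmf_subsystem_allow_any_host -e "$SUBNQN"
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" --hostnqn="$HOSTNQN"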
-- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.615 [2024-07-14 05:26:34.285460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.615 05:26:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.873 05:26:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.873 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:27.873 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.873 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:27.873 05:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:30.396 05:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:30.396 05:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:30.396 05:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.396 05:26:36 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:30.396 05:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.396 05:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:30.396 05:26:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.396 [2024-07-14 05:26:37.059172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.396 05:26:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.396 
05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.397 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.397 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.397 05:26:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.960 05:26:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:30.960 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:30.960 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.960 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:30.960 05:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:32.854 05:26:39 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.854 [2024-07-14 05:26:39.877891] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.854 05:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.419 05:26:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.419 05:26:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:33.419 05:26:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.419 05:26:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:33.419 05:26:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.940 [2024-07-14 05:26:42.638613] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.940 05:26:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.504 05:26:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.504 05:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:36.504 05:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 
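Each of the five iterations above follows the same create/attach/detach pattern. One iteration, written out as a standalone sketch with the arguments visible in the trace (the Malloc1 bdev was created earlier with bdev_malloc_create 64 512), is:

SUBNQN=nqn.2016-06.io.spdk:cnode1

rpc_cmd nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5      # expose Malloc1 as namespace 5
rpc_cmd nvmf_subsystem_allow_any_host "$SUBNQN"

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
# waitforserial: poll until a block device with the subsystem serial shows up
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done

nvme disconnect -n "$SUBNQN"
rpc_cmd nvmf_subsystem_remove_ns "$SUBNQN" 5
rpc_cmd nvmf_delete_subsystem "$SUBNQN"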
00:12:36.504 05:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:36.504 05:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.401 05:26:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.402 [2024-07-14 05:26:45.444558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.402 05:26:45 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.402 05:26:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.967 05:26:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.967 05:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:38.967 05:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.967 05:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:38.967 05:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 [2024-07-14 05:26:48.180535] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 [2024-07-14 05:26:48.228616] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 [2024-07-14 05:26:48.276785] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 [2024-07-14 05:26:48.324973] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.522 [2024-07-14 05:26:48.373106] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:41.522 "tick_rate": 2700000000, 00:12:41.522 "poll_groups": [ 00:12:41.522 { 00:12:41.522 "name": "nvmf_tgt_poll_group_000", 00:12:41.522 "admin_qpairs": 2, 00:12:41.522 
"io_qpairs": 84, 00:12:41.522 "current_admin_qpairs": 0, 00:12:41.522 "current_io_qpairs": 0, 00:12:41.522 "pending_bdev_io": 0, 00:12:41.522 "completed_nvme_io": 140, 00:12:41.522 "transports": [ 00:12:41.522 { 00:12:41.522 "trtype": "TCP" 00:12:41.522 } 00:12:41.522 ] 00:12:41.522 }, 00:12:41.522 { 00:12:41.522 "name": "nvmf_tgt_poll_group_001", 00:12:41.522 "admin_qpairs": 2, 00:12:41.522 "io_qpairs": 84, 00:12:41.522 "current_admin_qpairs": 0, 00:12:41.522 "current_io_qpairs": 0, 00:12:41.522 "pending_bdev_io": 0, 00:12:41.522 "completed_nvme_io": 189, 00:12:41.522 "transports": [ 00:12:41.522 { 00:12:41.522 "trtype": "TCP" 00:12:41.522 } 00:12:41.522 ] 00:12:41.522 }, 00:12:41.522 { 00:12:41.522 "name": "nvmf_tgt_poll_group_002", 00:12:41.522 "admin_qpairs": 1, 00:12:41.522 "io_qpairs": 84, 00:12:41.522 "current_admin_qpairs": 0, 00:12:41.522 "current_io_qpairs": 0, 00:12:41.522 "pending_bdev_io": 0, 00:12:41.522 "completed_nvme_io": 129, 00:12:41.522 "transports": [ 00:12:41.522 { 00:12:41.522 "trtype": "TCP" 00:12:41.522 } 00:12:41.522 ] 00:12:41.522 }, 00:12:41.522 { 00:12:41.522 "name": "nvmf_tgt_poll_group_003", 00:12:41.522 "admin_qpairs": 2, 00:12:41.522 "io_qpairs": 84, 00:12:41.522 "current_admin_qpairs": 0, 00:12:41.522 "current_io_qpairs": 0, 00:12:41.522 "pending_bdev_io": 0, 00:12:41.522 "completed_nvme_io": 228, 00:12:41.522 "transports": [ 00:12:41.522 { 00:12:41.522 "trtype": "TCP" 00:12:41.522 } 00:12:41.522 ] 00:12:41.522 } 00:12:41.522 ] 00:12:41.522 }' 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:41.522 rmmod nvme_tcp 00:12:41.522 rmmod nvme_fabrics 00:12:41.522 rmmod nvme_keyring 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:41.522 05:26:48 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3165837 ']' 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3165837 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 3165837 ']' 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 3165837 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3165837 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3165837' 00:12:41.522 killing process with pid 3165837 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 3165837 00:12:41.522 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 3165837 00:12:41.781 05:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:41.781 05:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:41.781 05:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:41.781 05:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.781 05:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:41.781 05:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.781 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.781 05:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.317 05:26:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:44.317 00:12:44.317 real 0m24.979s 00:12:44.317 user 1m21.324s 00:12:44.317 sys 0m4.048s 00:12:44.317 05:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:44.317 05:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.317 ************************************ 00:12:44.317 END TEST nvmf_rpc 00:12:44.317 ************************************ 00:12:44.317 05:26:50 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:44.317 05:26:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:44.317 05:26:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:44.317 05:26:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:44.317 ************************************ 00:12:44.317 START TEST nvmf_invalid 00:12:44.317 ************************************ 00:12:44.317 05:26:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:44.317 * Looking for test storage... 
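Before the invalid-argument tests get going, note how the qpair totals in the nvmf_rpc run above were produced: target/rpc.sh@19-20 in the trace pipe a jq filter over the captured nvmf_get_stats output into awk to sum the per-poll-group counters. A minimal sketch of that helper, assuming the stats JSON is held in the $stats variable the trace assigns at rpc.sh@110 (the exact way the JSON is fed to jq is not visible here):

# sketch of the jsum helper seen at target/rpc.sh@19-20 above;
# assumes the nvmf_get_stats JSON was captured into $stats (rpc.sh@110)
jsum() {
	local filter=$1
	jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

# the trace then checks (( $(jsum '.poll_groups[].admin_qpairs') > 0 )) and
# (( $(jsum '.poll_groups[].io_qpairs') > 0 )); for the run above that is
# 2+2+1+2 = 7 admin qpairs and 84+84+84+84 = 336 I/O qpairs.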
00:12:44.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:44.317 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:44.318 05:26:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:46.222 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:46.222 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.222 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:46.223 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:46.223 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:46.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:46.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:12:46.223 00:12:46.223 --- 10.0.0.2 ping statistics --- 00:12:46.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.223 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:12:46.223 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:46.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:12:46.223 00:12:46.223 --- 10.0.0.1 ping statistics --- 00:12:46.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.223 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3170324 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3170324 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 3170324 ']' 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:46.224 05:26:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:46.484 [2024-07-14 05:26:53.361138] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:12:46.484 [2024-07-14 05:26:53.361252] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.484 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.484 [2024-07-14 05:26:53.427780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.484 [2024-07-14 05:26:53.516399] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.484 [2024-07-14 05:26:53.516444] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.484 [2024-07-14 05:26:53.516473] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.484 [2024-07-14 05:26:53.516484] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.484 [2024-07-14 05:26:53.516494] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.484 [2024-07-14 05:26:53.516580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.484 [2024-07-14 05:26:53.516645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.484 [2024-07-14 05:26:53.516714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.484 [2024-07-14 05:26:53.516712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.742 05:26:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:46.742 05:26:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:12:46.742 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:46.742 05:26:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:46.742 05:26:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:46.742 05:26:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.742 05:26:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:46.742 05:26:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3462 00:12:46.999 [2024-07-14 05:26:53.885130] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:46.999 05:26:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:46.999 { 00:12:46.999 "nqn": "nqn.2016-06.io.spdk:cnode3462", 00:12:46.999 "tgt_name": "foobar", 00:12:46.999 "method": "nvmf_create_subsystem", 00:12:46.999 "req_id": 1 00:12:46.999 } 00:12:46.999 Got JSON-RPC error response 00:12:46.999 response: 00:12:46.999 { 00:12:46.999 "code": -32603, 00:12:46.999 "message": "Unable to find target foobar" 00:12:46.999 }' 00:12:46.999 05:26:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:46.999 { 00:12:46.999 "nqn": "nqn.2016-06.io.spdk:cnode3462", 00:12:46.999 "tgt_name": "foobar", 00:12:46.999 "method": "nvmf_create_subsystem", 00:12:46.999 "req_id": 1 00:12:46.999 } 00:12:46.999 Got JSON-RPC error response 00:12:46.999 response: 00:12:46.999 { 00:12:46.999 "code": -32603, 00:12:46.999 "message": "Unable to find target foobar" 00:12:46.999 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:46.999 05:26:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:46.999 05:26:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21206 00:12:47.257 [2024-07-14 05:26:54.138024] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21206: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:47.257 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:47.257 { 00:12:47.257 "nqn": "nqn.2016-06.io.spdk:cnode21206", 00:12:47.257 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:47.257 "method": "nvmf_create_subsystem", 00:12:47.257 "req_id": 1 00:12:47.257 } 00:12:47.257 Got JSON-RPC error response 00:12:47.257 response: 00:12:47.257 { 00:12:47.257 "code": -32602, 00:12:47.257 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:47.257 }' 00:12:47.257 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:47.257 { 00:12:47.257 "nqn": "nqn.2016-06.io.spdk:cnode21206", 00:12:47.257 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:47.257 "method": "nvmf_create_subsystem", 00:12:47.257 "req_id": 1 00:12:47.257 } 00:12:47.257 Got JSON-RPC error response 00:12:47.257 response: 00:12:47.257 { 00:12:47.257 "code": -32602, 00:12:47.257 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:47.257 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:47.257 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:47.257 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25209 00:12:47.515 [2024-07-14 05:26:54.382818] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25209: invalid model number 'SPDK_Controller' 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:47.515 { 00:12:47.515 "nqn": "nqn.2016-06.io.spdk:cnode25209", 00:12:47.515 "model_number": "SPDK_Controller\u001f", 00:12:47.515 "method": "nvmf_create_subsystem", 00:12:47.515 "req_id": 1 00:12:47.515 } 00:12:47.515 Got JSON-RPC error response 00:12:47.515 response: 00:12:47.515 { 00:12:47.515 "code": -32602, 00:12:47.515 "message": "Invalid MN SPDK_Controller\u001f" 00:12:47.515 }' 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:47.515 { 00:12:47.515 "nqn": "nqn.2016-06.io.spdk:cnode25209", 00:12:47.515 "model_number": "SPDK_Controller\u001f", 00:12:47.515 "method": "nvmf_create_subsystem", 00:12:47.515 "req_id": 1 00:12:47.515 } 00:12:47.515 Got JSON-RPC error response 00:12:47.515 response: 00:12:47.515 { 00:12:47.515 "code": -32602, 00:12:47.515 "message": "Invalid MN SPDK_Controller\u001f" 00:12:47.515 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:47.515 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
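The random string being assembled here feeds the same negative-test pattern used just above for the fixed inputs (target/invalid.sh@40-51): call rpc.py with a bad target name, serial number, or model number, capture the JSON-RPC error text, and glob-match the message. A hedged sketch of that pattern follows; $rpc is scripts/rpc.py as set at invalid.sh@12, while the 2>&1 redirection and the non-fatal || true are assumptions, since the trace only shows the captured text and the match:

# bad target name -> code -32603 "Unable to find target foobar"
out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3462 2>&1) || true
[[ $out == *"Unable to find target"* ]]

# control character in the serial number -> code -32602 "Invalid SN ..."
out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21206 2>&1) || true
[[ $out == *"Invalid SN"* ]]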
00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ v == \- ]] 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'vW1NVN;^4SsvV,.`3we0h' 00:12:47.516 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'vW1NVN;^4SsvV,.`3we0h' nqn.2016-06.io.spdk:cnode31531 00:12:47.774 [2024-07-14 05:26:54.695886] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31531: invalid serial number 'vW1NVN;^4SsvV,.`3we0h' 00:12:47.774 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:47.774 { 00:12:47.774 "nqn": "nqn.2016-06.io.spdk:cnode31531", 00:12:47.774 "serial_number": 
"vW1NVN;^4SsvV,.`3we0h", 00:12:47.774 "method": "nvmf_create_subsystem", 00:12:47.774 "req_id": 1 00:12:47.774 } 00:12:47.774 Got JSON-RPC error response 00:12:47.774 response: 00:12:47.774 { 00:12:47.774 "code": -32602, 00:12:47.774 "message": "Invalid SN vW1NVN;^4SsvV,.`3we0h" 00:12:47.774 }' 00:12:47.774 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:47.774 { 00:12:47.774 "nqn": "nqn.2016-06.io.spdk:cnode31531", 00:12:47.774 "serial_number": "vW1NVN;^4SsvV,.`3we0h", 00:12:47.774 "method": "nvmf_create_subsystem", 00:12:47.774 "req_id": 1 00:12:47.774 } 00:12:47.775 Got JSON-RPC error response 00:12:47.775 response: 00:12:47.775 { 00:12:47.775 "code": -32602, 00:12:47.775 "message": "Invalid SN vW1NVN;^4SsvV,.`3we0h" 00:12:47.775 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll 
< length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 125 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.775 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
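The surrounding iterations each append one character: invalid.sh picks a character code, renders it as hex with printf %x, decodes it with echo -e, and concatenates. A condensed sketch of the same technique follows; the variable names and fixed length are illustrative, not copied from the script.

# Sketch: build a random printable string one byte at a time.
string=''
length=41                              # illustrative; pick whatever length the test calls for
for (( ll = 0; ll < length; ll++ )); do
    code=$(( RANDOM % 94 + 33 ))       # assumed printable range; the real run also emits space and DEL
    hex=$(printf %x "$code")           # decimal character code -> hex, e.g. 56 -> 38
    string+=$(echo -e "\\x$hex")       # decode the \xNN escape back into the character
done
echo "$string"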
00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 0 == \- ]] 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '0B2XuP;iRco8QCc}x$}/zM}z.k:1n[3{2U6a H9' 00:12:47.776 05:26:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '0B2XuP;iRco8QCc}x$}/zM}z.k:1n[3{2U6a H9' nqn.2016-06.io.spdk:cnode22971 00:12:48.033 [2024-07-14 05:26:55.097134] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22971: invalid model number '0B2XuP;iRco8QCc}x$}/zM}z.k:1n[3{2U6a H9' 00:12:48.033 05:26:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:48.033 { 00:12:48.033 "nqn": "nqn.2016-06.io.spdk:cnode22971", 00:12:48.033 "model_number": 
"0B2XuP;\u007fiRco8QCc\u007f}x$}/zM}z.k:1n[3{2U6a H9", 00:12:48.033 "method": "nvmf_create_subsystem", 00:12:48.033 "req_id": 1 00:12:48.033 } 00:12:48.033 Got JSON-RPC error response 00:12:48.033 response: 00:12:48.033 { 00:12:48.033 "code": -32602, 00:12:48.033 "message": "Invalid MN 0B2XuP;\u007fiRco8QCc\u007f}x$}/zM}z.k:1n[3{2U6a H9" 00:12:48.033 }' 00:12:48.033 05:26:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:48.033 { 00:12:48.033 "nqn": "nqn.2016-06.io.spdk:cnode22971", 00:12:48.033 "model_number": "0B2XuP;\u007fiRco8QCc\u007f}x$}/zM}z.k:1n[3{2U6a H9", 00:12:48.033 "method": "nvmf_create_subsystem", 00:12:48.033 "req_id": 1 00:12:48.033 } 00:12:48.033 Got JSON-RPC error response 00:12:48.033 response: 00:12:48.033 { 00:12:48.033 "code": -32602, 00:12:48.034 "message": "Invalid MN 0B2XuP;\u007fiRco8QCc\u007f}x$}/zM}z.k:1n[3{2U6a H9" 00:12:48.034 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:48.034 05:26:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:48.291 [2024-07-14 05:26:55.354045] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.291 05:26:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:48.548 05:26:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:48.548 05:26:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:48.548 05:26:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:48.548 05:26:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:48.548 05:26:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:48.804 [2024-07-14 05:26:55.843610] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:48.804 05:26:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:48.804 { 00:12:48.804 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:48.804 "listen_address": { 00:12:48.804 "trtype": "tcp", 00:12:48.804 "traddr": "", 00:12:48.804 "trsvcid": "4421" 00:12:48.804 }, 00:12:48.804 "method": "nvmf_subsystem_remove_listener", 00:12:48.804 "req_id": 1 00:12:48.804 } 00:12:48.804 Got JSON-RPC error response 00:12:48.804 response: 00:12:48.804 { 00:12:48.804 "code": -32602, 00:12:48.804 "message": "Invalid parameters" 00:12:48.804 }' 00:12:48.804 05:26:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:48.804 { 00:12:48.804 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:48.804 "listen_address": { 00:12:48.804 "trtype": "tcp", 00:12:48.804 "traddr": "", 00:12:48.804 "trsvcid": "4421" 00:12:48.804 }, 00:12:48.804 "method": "nvmf_subsystem_remove_listener", 00:12:48.804 "req_id": 1 00:12:48.804 } 00:12:48.804 Got JSON-RPC error response 00:12:48.804 response: 00:12:48.804 { 00:12:48.804 "code": -32602, 00:12:48.804 "message": "Invalid parameters" 00:12:48.804 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:48.804 05:26:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1903 -i 0 00:12:49.060 [2024-07-14 05:26:56.084385] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode1903: invalid cntlid range [0-65519] 00:12:49.060 05:26:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:49.060 { 00:12:49.060 "nqn": "nqn.2016-06.io.spdk:cnode1903", 00:12:49.060 "min_cntlid": 0, 00:12:49.060 "method": "nvmf_create_subsystem", 00:12:49.060 "req_id": 1 00:12:49.060 } 00:12:49.060 Got JSON-RPC error response 00:12:49.060 response: 00:12:49.060 { 00:12:49.060 "code": -32602, 00:12:49.060 "message": "Invalid cntlid range [0-65519]" 00:12:49.060 }' 00:12:49.060 05:26:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:49.060 { 00:12:49.060 "nqn": "nqn.2016-06.io.spdk:cnode1903", 00:12:49.060 "min_cntlid": 0, 00:12:49.060 "method": "nvmf_create_subsystem", 00:12:49.060 "req_id": 1 00:12:49.060 } 00:12:49.060 Got JSON-RPC error response 00:12:49.060 response: 00:12:49.060 { 00:12:49.060 "code": -32602, 00:12:49.060 "message": "Invalid cntlid range [0-65519]" 00:12:49.060 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:49.060 05:26:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6674 -i 65520 00:12:49.317 [2024-07-14 05:26:56.345261] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6674: invalid cntlid range [65520-65519] 00:12:49.317 05:26:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:49.317 { 00:12:49.317 "nqn": "nqn.2016-06.io.spdk:cnode6674", 00:12:49.317 "min_cntlid": 65520, 00:12:49.317 "method": "nvmf_create_subsystem", 00:12:49.317 "req_id": 1 00:12:49.317 } 00:12:49.317 Got JSON-RPC error response 00:12:49.317 response: 00:12:49.317 { 00:12:49.317 "code": -32602, 00:12:49.317 "message": "Invalid cntlid range [65520-65519]" 00:12:49.317 }' 00:12:49.317 05:26:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:49.317 { 00:12:49.317 "nqn": "nqn.2016-06.io.spdk:cnode6674", 00:12:49.317 "min_cntlid": 65520, 00:12:49.317 "method": "nvmf_create_subsystem", 00:12:49.317 "req_id": 1 00:12:49.317 } 00:12:49.317 Got JSON-RPC error response 00:12:49.317 response: 00:12:49.317 { 00:12:49.317 "code": -32602, 00:12:49.317 "message": "Invalid cntlid range [65520-65519]" 00:12:49.317 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:49.317 05:26:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9331 -I 0 00:12:49.574 [2024-07-14 05:26:56.586102] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9331: invalid cntlid range [1-0] 00:12:49.574 05:26:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:49.574 { 00:12:49.574 "nqn": "nqn.2016-06.io.spdk:cnode9331", 00:12:49.574 "max_cntlid": 0, 00:12:49.574 "method": "nvmf_create_subsystem", 00:12:49.574 "req_id": 1 00:12:49.574 } 00:12:49.574 Got JSON-RPC error response 00:12:49.574 response: 00:12:49.574 { 00:12:49.574 "code": -32602, 00:12:49.574 "message": "Invalid cntlid range [1-0]" 00:12:49.574 }' 00:12:49.574 05:26:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:49.574 { 00:12:49.574 "nqn": "nqn.2016-06.io.spdk:cnode9331", 00:12:49.574 "max_cntlid": 0, 00:12:49.574 "method": "nvmf_create_subsystem", 00:12:49.574 "req_id": 1 00:12:49.574 } 00:12:49.574 Got JSON-RPC error response 00:12:49.574 response: 00:12:49.574 { 00:12:49.574 "code": -32602, 00:12:49.574 
"message": "Invalid cntlid range [1-0]" 00:12:49.574 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:49.574 05:26:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23245 -I 65520 00:12:49.832 [2024-07-14 05:26:56.826893] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23245: invalid cntlid range [1-65520] 00:12:49.832 05:26:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:49.832 { 00:12:49.832 "nqn": "nqn.2016-06.io.spdk:cnode23245", 00:12:49.832 "max_cntlid": 65520, 00:12:49.832 "method": "nvmf_create_subsystem", 00:12:49.832 "req_id": 1 00:12:49.832 } 00:12:49.832 Got JSON-RPC error response 00:12:49.832 response: 00:12:49.832 { 00:12:49.832 "code": -32602, 00:12:49.832 "message": "Invalid cntlid range [1-65520]" 00:12:49.832 }' 00:12:49.832 05:26:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:49.832 { 00:12:49.832 "nqn": "nqn.2016-06.io.spdk:cnode23245", 00:12:49.832 "max_cntlid": 65520, 00:12:49.832 "method": "nvmf_create_subsystem", 00:12:49.832 "req_id": 1 00:12:49.832 } 00:12:49.832 Got JSON-RPC error response 00:12:49.832 response: 00:12:49.832 { 00:12:49.832 "code": -32602, 00:12:49.832 "message": "Invalid cntlid range [1-65520]" 00:12:49.832 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:49.832 05:26:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11201 -i 6 -I 5 00:12:50.089 [2024-07-14 05:26:57.063671] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11201: invalid cntlid range [6-5] 00:12:50.089 05:26:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:50.089 { 00:12:50.089 "nqn": "nqn.2016-06.io.spdk:cnode11201", 00:12:50.089 "min_cntlid": 6, 00:12:50.089 "max_cntlid": 5, 00:12:50.089 "method": "nvmf_create_subsystem", 00:12:50.089 "req_id": 1 00:12:50.089 } 00:12:50.089 Got JSON-RPC error response 00:12:50.089 response: 00:12:50.089 { 00:12:50.089 "code": -32602, 00:12:50.089 "message": "Invalid cntlid range [6-5]" 00:12:50.089 }' 00:12:50.089 05:26:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:50.089 { 00:12:50.089 "nqn": "nqn.2016-06.io.spdk:cnode11201", 00:12:50.089 "min_cntlid": 6, 00:12:50.089 "max_cntlid": 5, 00:12:50.089 "method": "nvmf_create_subsystem", 00:12:50.089 "req_id": 1 00:12:50.089 } 00:12:50.089 Got JSON-RPC error response 00:12:50.089 response: 00:12:50.089 { 00:12:50.089 "code": -32602, 00:12:50.089 "message": "Invalid cntlid range [6-5]" 00:12:50.089 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:50.089 05:26:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:50.347 { 00:12:50.347 "name": "foobar", 00:12:50.347 "method": "nvmf_delete_target", 00:12:50.347 "req_id": 1 00:12:50.347 } 00:12:50.347 Got JSON-RPC error response 00:12:50.347 response: 00:12:50.347 { 00:12:50.347 "code": -32602, 00:12:50.347 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:12:50.347 }' 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:50.347 { 00:12:50.347 "name": "foobar", 00:12:50.347 "method": "nvmf_delete_target", 00:12:50.347 "req_id": 1 00:12:50.347 } 00:12:50.347 Got JSON-RPC error response 00:12:50.347 response: 00:12:50.347 { 00:12:50.347 "code": -32602, 00:12:50.347 "message": "The specified target doesn't exist, cannot delete it." 00:12:50.347 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.347 rmmod nvme_tcp 00:12:50.347 rmmod nvme_fabrics 00:12:50.347 rmmod nvme_keyring 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3170324 ']' 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3170324 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 3170324 ']' 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 3170324 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3170324 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3170324' 00:12:50.347 killing process with pid 3170324 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 3170324 00:12:50.347 05:26:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 3170324 00:12:50.605 05:26:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:50.605 05:26:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:50.605 05:26:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:50.605 05:26:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.605 05:26:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.605 05:26:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.605 05:26:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
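Each negative case above follows the same pattern: issue the RPC with a deliberately bad parameter, capture the JSON-RPC error response, and assert on the message text. A reduced sketch of that pattern, reusing the cnode1903 min_cntlid case from the trace (the || true guard is an assumption about how the nonzero exit status is tolerated):

# Sketch: expect nvmf_create_subsystem to reject min_cntlid 0.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1903 -i 0 2>&1) || true
[[ $out == *'Invalid cntlid range'* ]]   # the test fails here if the expected error text is missing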
00:12:50.605 05:26:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.509 05:26:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:52.509 00:12:52.509 real 0m8.597s 00:12:52.509 user 0m19.570s 00:12:52.509 sys 0m2.430s 00:12:52.509 05:26:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:52.509 05:26:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:52.509 ************************************ 00:12:52.509 END TEST nvmf_invalid 00:12:52.509 ************************************ 00:12:52.509 05:26:59 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:52.509 05:26:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:52.509 05:26:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:52.509 05:26:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:52.509 ************************************ 00:12:52.509 START TEST nvmf_abort 00:12:52.509 ************************************ 00:12:52.509 05:26:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:52.768 * Looking for test storage... 00:12:52.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.768 05:26:59 
nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 
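abort.sh has just set its malloc sizing above and keeps sourcing below; taken together, it follows the usual skeleton of these target tests: pull in the shared nvmf/common.sh, initialize the test network, register cleanup, start the target, then drive it over RPC. A stripped-down sketch of that skeleton (the exact source line and ordering inside abort.sh may differ slightly):

# Sketch of the common nvmf target-test skeleton seen in this trace.
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
MALLOC_BDEV_SIZE=64                      # values as set by abort.sh above and below
MALLOC_BLOCK_SIZE=4096
nvmftestinit                             # detects the e810 ports and builds the cvl_0_0_ns_spdk namespace
trap nvmftestfini SIGINT SIGTERM EXIT    # tear the target and namespace down on exit
nvmfappstart -m 0xE                      # start nvmf_tgt on cores 1-3 inside the namespace
# ... per-test RPC calls and workload go here ...
nvmftestfini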
00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:12:52.768 05:26:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:54.667 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:54.667 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:54.667 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.667 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:54.668 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:54.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:54.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:12:54.668 00:12:54.668 --- 10.0.0.2 ping statistics --- 00:12:54.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.668 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:54.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:12:54.668 00:12:54.668 --- 10.0.0.1 ping statistics --- 00:12:54.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.668 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:54.668 05:27:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:54.925 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3172942 00:12:54.925 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:54.925 05:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3172942 00:12:54.925 05:27:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 3172942 ']' 00:12:54.925 05:27:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.925 05:27:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:54.925 05:27:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.925 05:27:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:54.925 05:27:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:54.925 [2024-07-14 05:27:01.818655] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
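The plumbing that makes these pings work is a few lines up: nvmf_tcp_init keeps the NIC's second port (cvl_0_1) in the default namespace as the initiator and moves the first port (cvl_0_0) into a private namespace for the target. Condensed from the trace, the sequence is roughly:

# Sketch of the namespace setup performed by nvmf_tcp_init above.
ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (default namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target reachability check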
00:12:54.925 [2024-07-14 05:27:01.818740] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.925 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.925 [2024-07-14 05:27:01.886889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:54.925 [2024-07-14 05:27:01.983562] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.926 [2024-07-14 05:27:01.983611] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.926 [2024-07-14 05:27:01.983639] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.926 [2024-07-14 05:27:01.983651] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.926 [2024-07-14 05:27:01.983661] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.926 [2024-07-14 05:27:01.983755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.926 [2024-07-14 05:27:01.983785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.926 [2024-07-14 05:27:01.983787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:55.183 [2024-07-14 05:27:02.113811] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:55.183 Malloc0 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:55.183 Delay0 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:55.183 05:27:02 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:55.183 [2024-07-14 05:27:02.189172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.183 05:27:02 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:55.183 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.440 [2024-07-14 05:27:02.338044] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:57.356 Initializing NVMe Controllers 00:12:57.356 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:57.356 controller IO queue size 128 less than required 00:12:57.356 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:57.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:57.356 Initialization complete. Launching workers. 
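The abort workload's results follow below; the target it attaches to was assembled above entirely over JSON-RPC. The script issues these through its rpc_cmd helper; shown here as direct rpc.py calls, condensed from the trace:

# Sketch of the target setup for the abort test, then the workload itself.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0          # 64 MB malloc bdev, 4096-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # delay bdev keeps I/O in flight long enough to abort
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128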
00:12:57.356 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 30715 00:12:57.356 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30780, failed to submit 62 00:12:57.356 success 30719, unsuccess 61, failed 0 00:12:57.356 05:27:04 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:57.356 05:27:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.356 05:27:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:57.356 05:27:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.356 05:27:04 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:57.356 05:27:04 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:57.356 05:27:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:57.356 05:27:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:57.356 05:27:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:57.356 05:27:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:57.356 05:27:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:57.356 05:27:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:57.626 rmmod nvme_tcp 00:12:57.626 rmmod nvme_fabrics 00:12:57.626 rmmod nvme_keyring 00:12:57.626 05:27:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:57.626 05:27:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:57.626 05:27:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:57.626 05:27:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3172942 ']' 00:12:57.626 05:27:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3172942 00:12:57.626 05:27:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 3172942 ']' 00:12:57.626 05:27:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 3172942 00:12:57.626 05:27:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:12:57.626 05:27:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:57.626 05:27:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3172942 00:12:57.626 05:27:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:57.626 05:27:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:57.626 05:27:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3172942' 00:12:57.626 killing process with pid 3172942 00:12:57.626 05:27:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 3172942 00:12:57.626 05:27:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 3172942 00:12:57.884 05:27:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:57.884 05:27:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:57.884 05:27:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:57.884 05:27:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:57.884 05:27:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:57.884 05:27:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.884 05:27:04 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.884 05:27:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.787 05:27:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:59.787 00:12:59.787 real 0m7.198s 00:12:59.787 user 0m10.157s 00:12:59.787 sys 0m2.651s 00:12:59.787 05:27:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:59.787 05:27:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:59.787 ************************************ 00:12:59.787 END TEST nvmf_abort 00:12:59.787 ************************************ 00:12:59.787 05:27:06 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:59.787 05:27:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:59.787 05:27:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:59.787 05:27:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:59.787 ************************************ 00:12:59.787 START TEST nvmf_ns_hotplug_stress 00:12:59.787 ************************************ 00:12:59.787 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:00.046 * Looking for test storage... 00:13:00.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.046 05:27:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.046 05:27:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.046 05:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:01.950 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:01.950 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.950 05:27:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:01.950 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:01.950 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:01.950 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
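Condensed for readability: the nvmf_tcp_init step traced below wires the two detected ice ports into a point-to-point NVMe/TCP test link. This is only a summary of the commands that appear in the following trace lines, with the interface names and addresses exactly as observed in this run:

  ip netns add cvl_0_0_ns_spdk                                        # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                  # reachability check, initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # and target -> initiator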
00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:01.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:13:01.951 00:13:01.951 --- 10.0.0.2 ping statistics --- 00:13:01.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.951 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:01.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:13:01.951 00:13:01.951 --- 10.0.0.1 ping statistics --- 00:13:01.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.951 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:01.951 05:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:01.951 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:01.951 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:01.951 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:01.951 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.951 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3175781 00:13:01.951 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:01.951 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3175781 00:13:01.951 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 3175781 ']' 00:13:01.951 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.951 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:01.951 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.951 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:01.951 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.210 [2024-07-14 05:27:09.077938] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:02.210 [2024-07-14 05:27:09.078021] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.210 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.210 [2024-07-14 05:27:09.144378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:02.210 [2024-07-14 05:27:09.230348] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:02.210 [2024-07-14 05:27:09.230414] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.210 [2024-07-14 05:27:09.230428] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.210 [2024-07-14 05:27:09.230440] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.210 [2024-07-14 05:27:09.230449] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.210 [2024-07-14 05:27:09.230532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.210 [2024-07-14 05:27:09.230562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.210 [2024-07-14 05:27:09.230564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.468 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:02.468 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:02.468 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:02.468 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:02.468 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.468 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.468 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:02.468 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:02.468 [2024-07-14 05:27:09.569966] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.726 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:02.726 05:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.984 [2024-07-14 05:27:10.064988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.984 05:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:03.242 05:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:03.500 Malloc0 00:13:03.500 05:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:03.758 Delay0 00:13:03.758 05:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.023 05:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:04.286 NULL1 00:13:04.286 05:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:04.544 05:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3176096 00:13:04.544 05:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:04.544 05:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:04.544 05:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.544 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.802 05:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.060 05:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:05.060 05:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:05.318 true 00:13:05.318 05:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:05.318 05:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.575 05:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.833 05:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:05.833 05:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:06.091 true 00:13:06.091 05:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:06.091 05:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.024 Read completed with error (sct=0, sc=11) 00:13:07.024 05:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.282 05:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:07.282 05:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:07.539 true 00:13:07.539 05:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:07.539 05:27:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.797 05:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.054 05:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:08.054 05:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:08.312 true 00:13:08.312 05:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:08.312 05:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.569 05:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.827 05:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:08.827 05:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:09.084 true 00:13:09.084 05:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:09.084 05:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.014 05:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.014 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.014 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.271 05:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:10.271 05:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:10.529 true 00:13:10.529 05:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:10.529 05:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.786 05:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.043 05:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:11.043 05:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:11.301 true 00:13:11.301 05:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:11.301 
05:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.233 05:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.519 05:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:12.519 05:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:12.519 true 00:13:12.777 05:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:12.777 05:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.034 05:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.292 05:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:13.292 05:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:13.292 true 00:13:13.292 05:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:13.292 05:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.223 05:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.480 05:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:14.480 05:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:14.737 true 00:13:14.737 05:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:14.737 05:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.995 05:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.252 05:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:15.252 05:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:15.509 true 00:13:15.509 05:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 
00:13:15.509 05:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:16.440 05:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:16.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:16.697 05:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:16.697 05:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:16.954 true 00:13:16.954 05:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:16.954 05:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.212 05:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.470 05:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:17.470 05:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:17.727 true 00:13:17.727 05:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:17.727 05:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.658 05:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.916 05:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:18.916 05:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:19.173 true 00:13:19.173 05:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:19.173 05:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.430 05:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.687 05:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:19.687 05:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:19.945 true 00:13:19.945 05:27:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:19.945 05:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.878 05:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.878 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:20.878 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.136 05:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:21.136 05:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:21.136 true 00:13:21.394 05:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:21.394 05:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.652 05:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.652 05:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:21.652 05:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:22.217 true 00:13:22.217 05:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:22.217 05:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.783 05:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.040 05:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:23.040 05:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:23.297 true 00:13:23.297 05:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:23.297 05:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.555 05:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.813 05:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:23.813 05:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:24.070 true 00:13:24.070 
05:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:24.070 05:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.003 05:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.260 05:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:25.260 05:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:25.517 true 00:13:25.517 05:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:25.517 05:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.774 05:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.031 05:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:26.031 05:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:26.291 true 00:13:26.291 05:27:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:26.291 05:27:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.224 05:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.503 05:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:27.503 05:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:27.761 true 00:13:27.761 05:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:27.761 05:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.018 05:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.275 05:27:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:28.275 05:27:35 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:28.532 true 00:13:28.532 05:27:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:28.532 05:27:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.464 05:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.464 05:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:29.464 05:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:29.721 true 00:13:29.721 05:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:29.721 05:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.978 05:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.551 05:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:30.551 05:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:30.551 true 00:13:30.551 05:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:30.551 05:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.484 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.484 05:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.745 05:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:31.746 05:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:32.007 true 00:13:32.007 05:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:32.007 05:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.264 05:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.521 05:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:32.521 05:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:32.778 true 00:13:32.778 05:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:32.778 05:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.709 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.709 05:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.966 05:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:33.966 05:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:34.222 true 00:13:34.222 05:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:34.222 05:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.478 05:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.735 05:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:34.735 05:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:34.991 true 00:13:34.991 05:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:34.991 05:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.923 05:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.923 Initializing NVMe Controllers 00:13:35.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:35.923 Controller IO queue size 128, less than required. 00:13:35.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:35.923 Controller IO queue size 128, less than required. 00:13:35.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:13:35.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:35.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:35.923 Initialization complete. Launching workers. 00:13:35.923 ======================================================== 00:13:35.923 Latency(us) 00:13:35.923 Device Information : IOPS MiB/s Average min max 00:13:35.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 725.97 0.35 91315.21 2630.98 1011613.60 00:13:35.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10421.45 5.09 12247.24 2915.08 452682.91 00:13:35.923 ======================================================== 00:13:35.923 Total : 11147.42 5.44 17396.47 2630.98 1011613.60 00:13:35.923 00:13:36.180 05:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:36.180 05:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:36.437 true 00:13:36.437 05:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3176096 00:13:36.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3176096) - No such process 00:13:36.437 05:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3176096 00:13:36.437 05:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.694 05:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:36.951 05:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:36.951 05:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:36.951 05:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:36.951 05:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:36.951 05:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:36.951 null0 00:13:36.951 05:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:36.951 05:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:36.951 05:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:37.208 null1 00:13:37.208 05:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:37.208 05:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:37.208 05:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:37.465 null2 00:13:37.465 05:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:37.465 05:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:37.465 05:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:37.722 null3 00:13:37.722 05:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:37.722 05:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:37.722 05:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:37.979 null4 00:13:37.979 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:37.979 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:37.979 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:38.237 null5 00:13:38.237 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:38.237 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:38.237 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:38.495 null6 00:13:38.495 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:38.495 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:38.495 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:38.753 null7 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:38.753 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3180266 3180267 3180269 3180271 3180273 3180275 3180277 3180279 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.754 05:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:39.012 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:39.012 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.012 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:39.012 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:39.012 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:39.012 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:39.012 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:39.012 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.271 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:39.529 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.529 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:39.529 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:39.529 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:39.529 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:39.529 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:39.529 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:39.529 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.787 05:27:46 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.787 05:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:40.046 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.046 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:40.046 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:40.046 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.046 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:40.046 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:40.046 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:40.046 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:40.304 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.305 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.305 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:40.305 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.305 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.305 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:40.563 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.563 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:40.563 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:40.563 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.563 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:40.563 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:40.563 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:40.563 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.821 05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.821 
05:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:41.080 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.080 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:41.080 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.080 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:41.080 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:41.080 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:41.338 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:41.338 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:41.338 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.338 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.338 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:41.338 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.338 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.338 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:41.338 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.338 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.338 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:41.626 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.885 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:41.885 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:41.885 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:41.885 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:41.885 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:41.885 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:41.885 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.885 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:41.885 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.885 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.885 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:41.885 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.885 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.885 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:42.143 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.143 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.143 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:42.143 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.143 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.143 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:42.143 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.143 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.143 05:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:42.143 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.143 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.143 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.143 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.143 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:42.143 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:42.143 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.402 
05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:42.402 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.402 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:42.402 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:42.402 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:42.402 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:42.402 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.660 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:42.661 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.661 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.661 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:42.661 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.661 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.661 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:42.918 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.919 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.919 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:42.919 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:42.919 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.919 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:42.919 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:42.919 05:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.177 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:43.435 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:43.435 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.435 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.435 
05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:43.435 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:43.435 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:43.435 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:43.435 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.693 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:43.951 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.951 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:43.951 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.951 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:43.951 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:43.951 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:43.951 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:43.951 05:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
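Editorial sketch of the pattern traced above, for readability: eight background workers each bind one null bdev to one namespace ID on nqn.2016-06.io.spdk:cnode1 and hot-add/hot-remove it ten times while the target stays up. The namespace IDs, the 10-iteration count, and the null0..null7 bdev names are exactly what the xtrace shows; the function and launcher bodies below are a reconstruction from the trace, not a verbatim copy of ns_hotplug_stress.sh, and the rpc.py path is abbreviated from the full workspace path seen above.

    # Reconstructed from the xtrace (approximate, not the script verbatim).
    # One worker per namespace: add the namespace, then remove it, 10 times.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    # Eight workers run in parallel (nsid 1..8 over null0..null7) and are reaped
    # with wait, matching the pid list printed at ns_hotplug_stress.sh@66 above.
    pids=()
    for ((i = 0; i < 8; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"

The interleaved add/remove ordering in the log is expected: the eight workers race, so the per-iteration batches do not land in namespace-ID order.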
00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:44.210 rmmod nvme_tcp 00:13:44.210 rmmod nvme_fabrics 00:13:44.210 rmmod nvme_keyring 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3175781 ']' 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3175781 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 3175781 ']' 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 3175781 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3175781 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3175781' 00:13:44.210 killing process with pid 3175781 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 3175781 00:13:44.210 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 3175781 00:13:44.468 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:44.468 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:44.468 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:44.468 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.468 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:44.468 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.468 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.468 05:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.998 05:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:46.999 00:13:46.999 real 0m46.696s 00:13:46.999 user 3m32.633s 00:13:46.999 sys 0m16.671s 00:13:46.999 05:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:46.999 05:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.999 ************************************ 00:13:46.999 END TEST nvmf_ns_hotplug_stress 00:13:46.999 ************************************ 00:13:46.999 05:27:53 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:46.999 05:27:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:46.999 05:27:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:46.999 05:27:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:46.999 ************************************ 00:13:46.999 START TEST nvmf_connect_stress 00:13:46.999 ************************************ 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:46.999 * Looking for test storage... 
00:13:46.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:46.999 05:27:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:48.900 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:48.900 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:48.900 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.900 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.900 05:27:55 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:48.901 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:48.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:48.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:13:48.901 00:13:48.901 --- 10.0.0.2 ping statistics --- 00:13:48.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.901 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:48.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:48.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:13:48.901 00:13:48.901 --- 10.0.0.1 ping statistics --- 00:13:48.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.901 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3183024 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3183024 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 3183024 ']' 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:48.901 05:27:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.901 [2024-07-14 05:27:55.883473] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:13:48.901 [2024-07-14 05:27:55.883575] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.901 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.901 [2024-07-14 05:27:55.955306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:49.160 [2024-07-14 05:27:56.045925] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.160 [2024-07-14 05:27:56.045985] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.160 [2024-07-14 05:27:56.046013] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.160 [2024-07-14 05:27:56.046026] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.160 [2024-07-14 05:27:56.046039] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.160 [2024-07-14 05:27:56.046137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.160 [2024-07-14 05:27:56.046235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.160 [2024-07-14 05:27:56.046238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.160 [2024-07-14 05:27:56.178623] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.160 [2024-07-14 05:27:56.210023] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.160 NULL1 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3183049 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.160 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.161 05:27:56 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.161 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.161 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.726 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.726 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:49.726 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.726 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.726 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.984 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.984 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:49.984 05:27:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.984 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.984 05:27:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.241 05:27:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.241 05:27:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:50.241 05:27:57 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.241 05:27:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.241 05:27:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.499 05:27:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.499 05:27:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:50.499 05:27:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.499 05:27:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.499 05:27:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.064 05:27:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.064 05:27:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:51.064 05:27:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.064 05:27:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.064 05:27:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.322 05:27:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.322 05:27:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:51.322 05:27:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.322 05:27:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.322 05:27:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.579 05:27:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.579 05:27:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:51.579 05:27:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.579 05:27:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.579 05:27:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.837 05:27:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.837 05:27:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:51.837 05:27:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.837 05:27:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.837 05:27:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.095 05:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.095 05:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:52.095 05:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.095 05:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.095 05:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.658 05:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.658 05:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:52.658 05:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:13:52.658 05:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.658 05:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.936 05:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.936 05:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:52.936 05:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.936 05:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.936 05:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.194 05:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.194 05:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:53.194 05:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.194 05:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.194 05:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.452 05:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.452 05:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:53.452 05:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.452 05:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.452 05:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.709 05:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.709 05:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:53.709 05:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.709 05:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.709 05:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.279 05:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.279 05:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:54.279 05:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.279 05:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.279 05:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.535 05:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.535 05:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:54.535 05:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.535 05:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.535 05:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.792 05:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.792 05:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:54.793 05:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.793 05:28:01 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.793 05:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.049 05:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.049 05:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:55.049 05:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.049 05:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.049 05:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.306 05:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.306 05:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:55.306 05:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.306 05:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.306 05:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.871 05:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.871 05:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:55.871 05:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.871 05:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.871 05:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.129 05:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.129 05:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:56.129 05:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.129 05:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.129 05:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.385 05:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.385 05:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:56.385 05:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.385 05:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.385 05:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.642 05:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.642 05:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:56.642 05:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.642 05:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.642 05:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.898 05:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.898 05:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:56.898 05:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.899 05:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:13:56.899 05:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.507 05:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.507 05:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:57.507 05:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.507 05:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.507 05:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.764 05:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.764 05:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:57.764 05:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.764 05:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.764 05:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.021 05:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.021 05:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:58.021 05:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.021 05:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.021 05:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.279 05:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.279 05:28:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:58.279 05:28:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.279 05:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.279 05:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.536 05:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.536 05:28:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:58.536 05:28:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.536 05:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.536 05:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.101 05:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.101 05:28:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:59.101 05:28:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.101 05:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.101 05:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.358 05:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.358 05:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:59.358 05:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.358 05:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.358 05:28:06 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.358 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183049 00:13:59.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3183049) - No such process 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3183049 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:59.615 rmmod nvme_tcp 00:13:59.615 rmmod nvme_fabrics 00:13:59.615 rmmod nvme_keyring 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3183024 ']' 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3183024 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 3183024 ']' 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 3183024 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3183024 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3183024' 00:13:59.615 killing process with pid 3183024 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 3183024 00:13:59.615 05:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 3183024 00:13:59.873 05:28:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:59.873 05:28:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:59.873 05:28:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
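The long run of identical "[[ 0 == 0 ]] / kill -0 3183049 / rpc_cmd" entries above is the monitor loop of connect_stress.sh: while the background connect_stress client (PERF_PID 3183049) stays alive, the harness keeps replaying the batch of 20 RPCs it buffered into rpc.txt, and it stops once kill -0 reports "No such process". The sketch below is a rough reconstruction from the commands visible in the trace; feeding the batch file to rpc_cmd on stdin is an assumption, and the 20 buffered RPC bodies are not shown in this excerpt.

    # Sketch of the connect_stress monitor loop seen in the trace (illustrative only)
    RPCS=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
    BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress

    # start the stress client against the listener created above and remember its PID
    "$BIN" -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!

    # keep hitting the target with the buffered RPC batch while the client runs
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < "$RPCS"    # assumption: the batch is consumed on stdin
    done

    wait "$PERF_PID"         # collect the client's exit status once it is gone
    rm -f "$RPCS"
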
00:13:59.873 05:28:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.873 05:28:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:59.873 05:28:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.873 05:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.873 05:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.400 05:28:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:02.400 00:14:02.400 real 0m15.317s 00:14:02.400 user 0m37.911s 00:14:02.400 sys 0m6.060s 00:14:02.400 05:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:02.400 05:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.400 ************************************ 00:14:02.400 END TEST nvmf_connect_stress 00:14:02.400 ************************************ 00:14:02.400 05:28:08 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:02.400 05:28:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:02.400 05:28:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:02.400 05:28:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:02.400 ************************************ 00:14:02.400 START TEST nvmf_fused_ordering 00:14:02.400 ************************************ 00:14:02.400 05:28:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:02.400 * Looking for test storage... 
00:14:02.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.400 05:28:09 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:02.401 05:28:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.304 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:04.305 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:04.305 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:04.305 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.305 05:28:11 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:04.305 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:04.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:04.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:14:04.305 00:14:04.305 --- 10.0.0.2 ping statistics --- 00:14:04.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.305 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:14:04.305 00:14:04.305 --- 10.0.0.1 ping statistics --- 00:14:04.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.305 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3186193 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3186193 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 3186193 ']' 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:04.305 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.305 [2024-07-14 05:28:11.296576] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
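
The xtrace above is nvmftestinit preparing a physical-NIC TCP run: the two ice ports under 0000:0a:00.* are picked up as cvl_0_0 and cvl_0_1, one of them is moved into a private network namespace to act as the target side, connectivity on 10.0.0.0/24 is verified in both directions, and nvmf_tgt is then launched inside that namespace. Pulled out of the trace, the setup amounts to the sketch below; the interface names, addresses, and target options are the ones from this particular run, and the whole sequence needs root privileges.

#!/usr/bin/env bash
# Sketch of the namespace topology nvmftestinit builds in the trace above.
# cvl_0_0 / cvl_0_1 are the net devices found under PCI 0000:0a:00.0/.1.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Target side lives in its own namespace; the initiator stays in the host.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port used by the suite and check reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Kernel initiator support on the host, then the SPDK target in the namespace
# (core mask 0x2, trace mask 0xFFFF), as in the nvmfappstart call above.
modprobe nvme-tcp
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# The harness then waits for the target's /var/tmp/spdk.sock before issuing RPCs.
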
00:14:04.305 [2024-07-14 05:28:11.296656] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.305 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.305 [2024-07-14 05:28:11.360989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.564 [2024-07-14 05:28:11.452581] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.564 [2024-07-14 05:28:11.452647] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.564 [2024-07-14 05:28:11.452661] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.564 [2024-07-14 05:28:11.452672] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.564 [2024-07-14 05:28:11.452682] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:04.564 [2024-07-14 05:28:11.452712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.564 [2024-07-14 05:28:11.593017] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.564 [2024-07-14 05:28:11.609221] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.564 NULL1 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.564 05:28:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:04.564 [2024-07-14 05:28:11.654802] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:04.564 [2024-07-14 05:28:11.654843] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3186332 ] 00:14:04.822 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.756 Attached to nqn.2016-06.io.spdk:cnode1 00:14:05.756 Namespace ID: 1 size: 1GB 00:14:05.756 fused_ordering(0) 00:14:05.756 fused_ordering(1) 00:14:05.756 fused_ordering(2) 00:14:05.756 fused_ordering(3) 00:14:05.756 fused_ordering(4) 00:14:05.756 fused_ordering(5) 00:14:05.756 fused_ordering(6) 00:14:05.756 fused_ordering(7) 00:14:05.756 fused_ordering(8) 00:14:05.756 fused_ordering(9) 00:14:05.756 fused_ordering(10) 00:14:05.756 fused_ordering(11) 00:14:05.756 fused_ordering(12) 00:14:05.756 fused_ordering(13) 00:14:05.756 fused_ordering(14) 00:14:05.756 fused_ordering(15) 00:14:05.756 fused_ordering(16) 00:14:05.756 fused_ordering(17) 00:14:05.756 fused_ordering(18) 00:14:05.756 fused_ordering(19) 00:14:05.756 fused_ordering(20) 00:14:05.756 fused_ordering(21) 00:14:05.756 fused_ordering(22) 00:14:05.756 fused_ordering(23) 00:14:05.756 fused_ordering(24) 00:14:05.756 fused_ordering(25) 00:14:05.756 fused_ordering(26) 00:14:05.756 fused_ordering(27) 00:14:05.756 fused_ordering(28) 00:14:05.756 fused_ordering(29) 00:14:05.756 fused_ordering(30) 00:14:05.756 fused_ordering(31) 00:14:05.756 fused_ordering(32) 00:14:05.756 fused_ordering(33) 00:14:05.756 fused_ordering(34) 00:14:05.756 fused_ordering(35) 00:14:05.756 fused_ordering(36) 00:14:05.756 fused_ordering(37) 00:14:05.756 fused_ordering(38) 00:14:05.756 fused_ordering(39) 00:14:05.756 fused_ordering(40) 00:14:05.756 fused_ordering(41) 00:14:05.756 fused_ordering(42) 00:14:05.756 fused_ordering(43) 00:14:05.756 fused_ordering(44) 00:14:05.756 fused_ordering(45) 
00:14:05.756 fused_ordering(46) 00:14:05.756 fused_ordering(47) 00:14:05.756 fused_ordering(48) 00:14:05.756 fused_ordering(49) 00:14:05.756 fused_ordering(50) 00:14:05.756 fused_ordering(51) 00:14:05.756 fused_ordering(52) 00:14:05.756 fused_ordering(53) 00:14:05.756 fused_ordering(54) 00:14:05.756 fused_ordering(55) 00:14:05.756 fused_ordering(56) 00:14:05.756 fused_ordering(57) 00:14:05.756 fused_ordering(58) 00:14:05.756 fused_ordering(59) 00:14:05.756 fused_ordering(60) 00:14:05.756 fused_ordering(61) 00:14:05.756 fused_ordering(62) 00:14:05.756 fused_ordering(63) 00:14:05.756 fused_ordering(64) 00:14:05.756 fused_ordering(65) 00:14:05.756 fused_ordering(66) 00:14:05.756 fused_ordering(67) 00:14:05.756 fused_ordering(68) 00:14:05.756 fused_ordering(69) 00:14:05.756 fused_ordering(70) 00:14:05.756 fused_ordering(71) 00:14:05.756 fused_ordering(72) 00:14:05.756 fused_ordering(73) 00:14:05.756 fused_ordering(74) 00:14:05.756 fused_ordering(75) 00:14:05.756 fused_ordering(76) 00:14:05.756 fused_ordering(77) 00:14:05.756 fused_ordering(78) 00:14:05.756 fused_ordering(79) 00:14:05.756 fused_ordering(80) 00:14:05.756 fused_ordering(81) 00:14:05.756 fused_ordering(82) 00:14:05.756 fused_ordering(83) 00:14:05.756 fused_ordering(84) 00:14:05.756 fused_ordering(85) 00:14:05.756 fused_ordering(86) 00:14:05.756 fused_ordering(87) 00:14:05.756 fused_ordering(88) 00:14:05.756 fused_ordering(89) 00:14:05.756 fused_ordering(90) 00:14:05.756 fused_ordering(91) 00:14:05.756 fused_ordering(92) 00:14:05.756 fused_ordering(93) 00:14:05.756 fused_ordering(94) 00:14:05.756 fused_ordering(95) 00:14:05.756 fused_ordering(96) 00:14:05.756 fused_ordering(97) 00:14:05.756 fused_ordering(98) 00:14:05.756 fused_ordering(99) 00:14:05.756 fused_ordering(100) 00:14:05.756 fused_ordering(101) 00:14:05.756 fused_ordering(102) 00:14:05.756 fused_ordering(103) 00:14:05.756 fused_ordering(104) 00:14:05.756 fused_ordering(105) 00:14:05.756 fused_ordering(106) 00:14:05.756 fused_ordering(107) 00:14:05.756 fused_ordering(108) 00:14:05.756 fused_ordering(109) 00:14:05.756 fused_ordering(110) 00:14:05.756 fused_ordering(111) 00:14:05.756 fused_ordering(112) 00:14:05.756 fused_ordering(113) 00:14:05.756 fused_ordering(114) 00:14:05.756 fused_ordering(115) 00:14:05.756 fused_ordering(116) 00:14:05.756 fused_ordering(117) 00:14:05.756 fused_ordering(118) 00:14:05.756 fused_ordering(119) 00:14:05.756 fused_ordering(120) 00:14:05.756 fused_ordering(121) 00:14:05.756 fused_ordering(122) 00:14:05.756 fused_ordering(123) 00:14:05.756 fused_ordering(124) 00:14:05.756 fused_ordering(125) 00:14:05.756 fused_ordering(126) 00:14:05.757 fused_ordering(127) 00:14:05.757 fused_ordering(128) 00:14:05.757 fused_ordering(129) 00:14:05.757 fused_ordering(130) 00:14:05.757 fused_ordering(131) 00:14:05.757 fused_ordering(132) 00:14:05.757 fused_ordering(133) 00:14:05.757 fused_ordering(134) 00:14:05.757 fused_ordering(135) 00:14:05.757 fused_ordering(136) 00:14:05.757 fused_ordering(137) 00:14:05.757 fused_ordering(138) 00:14:05.757 fused_ordering(139) 00:14:05.757 fused_ordering(140) 00:14:05.757 fused_ordering(141) 00:14:05.757 fused_ordering(142) 00:14:05.757 fused_ordering(143) 00:14:05.757 fused_ordering(144) 00:14:05.757 fused_ordering(145) 00:14:05.757 fused_ordering(146) 00:14:05.757 fused_ordering(147) 00:14:05.757 fused_ordering(148) 00:14:05.757 fused_ordering(149) 00:14:05.757 fused_ordering(150) 00:14:05.757 fused_ordering(151) 00:14:05.757 fused_ordering(152) 00:14:05.757 fused_ordering(153) 00:14:05.757 fused_ordering(154) 
00:14:05.757 fused_ordering(155) 00:14:05.757 fused_ordering(156) 00:14:05.757 fused_ordering(157) 00:14:05.757 fused_ordering(158) 00:14:05.757 fused_ordering(159) 00:14:05.757 fused_ordering(160) 00:14:05.757 fused_ordering(161) 00:14:05.757 fused_ordering(162) 00:14:05.757 fused_ordering(163) 00:14:05.757 fused_ordering(164) 00:14:05.757 fused_ordering(165) 00:14:05.757 fused_ordering(166) 00:14:05.757 fused_ordering(167) 00:14:05.757 fused_ordering(168) 00:14:05.757 fused_ordering(169) 00:14:05.757 fused_ordering(170) 00:14:05.757 fused_ordering(171) 00:14:05.757 fused_ordering(172) 00:14:05.757 fused_ordering(173) 00:14:05.757 fused_ordering(174) 00:14:05.757 fused_ordering(175) 00:14:05.757 fused_ordering(176) 00:14:05.757 fused_ordering(177) 00:14:05.757 fused_ordering(178) 00:14:05.757 fused_ordering(179) 00:14:05.757 fused_ordering(180) 00:14:05.757 fused_ordering(181) 00:14:05.757 fused_ordering(182) 00:14:05.757 fused_ordering(183) 00:14:05.757 fused_ordering(184) 00:14:05.757 fused_ordering(185) 00:14:05.757 fused_ordering(186) 00:14:05.757 fused_ordering(187) 00:14:05.757 fused_ordering(188) 00:14:05.757 fused_ordering(189) 00:14:05.757 fused_ordering(190) 00:14:05.757 fused_ordering(191) 00:14:05.757 fused_ordering(192) 00:14:05.757 fused_ordering(193) 00:14:05.757 fused_ordering(194) 00:14:05.757 fused_ordering(195) 00:14:05.757 fused_ordering(196) 00:14:05.757 fused_ordering(197) 00:14:05.757 fused_ordering(198) 00:14:05.757 fused_ordering(199) 00:14:05.757 fused_ordering(200) 00:14:05.757 fused_ordering(201) 00:14:05.757 fused_ordering(202) 00:14:05.757 fused_ordering(203) 00:14:05.757 fused_ordering(204) 00:14:05.757 fused_ordering(205) 00:14:06.323 fused_ordering(206) 00:14:06.323 fused_ordering(207) 00:14:06.323 fused_ordering(208) 00:14:06.323 fused_ordering(209) 00:14:06.323 fused_ordering(210) 00:14:06.323 fused_ordering(211) 00:14:06.323 fused_ordering(212) 00:14:06.323 fused_ordering(213) 00:14:06.323 fused_ordering(214) 00:14:06.323 fused_ordering(215) 00:14:06.323 fused_ordering(216) 00:14:06.323 fused_ordering(217) 00:14:06.323 fused_ordering(218) 00:14:06.323 fused_ordering(219) 00:14:06.323 fused_ordering(220) 00:14:06.323 fused_ordering(221) 00:14:06.323 fused_ordering(222) 00:14:06.323 fused_ordering(223) 00:14:06.323 fused_ordering(224) 00:14:06.323 fused_ordering(225) 00:14:06.323 fused_ordering(226) 00:14:06.323 fused_ordering(227) 00:14:06.323 fused_ordering(228) 00:14:06.323 fused_ordering(229) 00:14:06.323 fused_ordering(230) 00:14:06.323 fused_ordering(231) 00:14:06.323 fused_ordering(232) 00:14:06.323 fused_ordering(233) 00:14:06.323 fused_ordering(234) 00:14:06.323 fused_ordering(235) 00:14:06.323 fused_ordering(236) 00:14:06.323 fused_ordering(237) 00:14:06.323 fused_ordering(238) 00:14:06.323 fused_ordering(239) 00:14:06.323 fused_ordering(240) 00:14:06.323 fused_ordering(241) 00:14:06.323 fused_ordering(242) 00:14:06.323 fused_ordering(243) 00:14:06.323 fused_ordering(244) 00:14:06.323 fused_ordering(245) 00:14:06.323 fused_ordering(246) 00:14:06.323 fused_ordering(247) 00:14:06.323 fused_ordering(248) 00:14:06.323 fused_ordering(249) 00:14:06.323 fused_ordering(250) 00:14:06.323 fused_ordering(251) 00:14:06.323 fused_ordering(252) 00:14:06.323 fused_ordering(253) 00:14:06.323 fused_ordering(254) 00:14:06.323 fused_ordering(255) 00:14:06.323 fused_ordering(256) 00:14:06.323 fused_ordering(257) 00:14:06.323 fused_ordering(258) 00:14:06.323 fused_ordering(259) 00:14:06.323 fused_ordering(260) 00:14:06.323 fused_ordering(261) 00:14:06.323 
fused_ordering(262) 00:14:06.323 fused_ordering(263) 00:14:06.323 fused_ordering(264) 00:14:06.323 fused_ordering(265) 00:14:06.323 fused_ordering(266) 00:14:06.323 fused_ordering(267) 00:14:06.323 fused_ordering(268) 00:14:06.323 fused_ordering(269) 00:14:06.323 fused_ordering(270) 00:14:06.323 fused_ordering(271) 00:14:06.323 fused_ordering(272) 00:14:06.323 fused_ordering(273) 00:14:06.323 fused_ordering(274) 00:14:06.323 fused_ordering(275) 00:14:06.323 fused_ordering(276) 00:14:06.324 fused_ordering(277) 00:14:06.324 fused_ordering(278) 00:14:06.324 fused_ordering(279) 00:14:06.324 fused_ordering(280) 00:14:06.324 fused_ordering(281) 00:14:06.324 fused_ordering(282) 00:14:06.324 fused_ordering(283) 00:14:06.324 fused_ordering(284) 00:14:06.324 fused_ordering(285) 00:14:06.324 fused_ordering(286) 00:14:06.324 fused_ordering(287) 00:14:06.324 fused_ordering(288) 00:14:06.324 fused_ordering(289) 00:14:06.324 fused_ordering(290) 00:14:06.324 fused_ordering(291) 00:14:06.324 fused_ordering(292) 00:14:06.324 fused_ordering(293) 00:14:06.324 fused_ordering(294) 00:14:06.324 fused_ordering(295) 00:14:06.324 fused_ordering(296) 00:14:06.324 fused_ordering(297) 00:14:06.324 fused_ordering(298) 00:14:06.324 fused_ordering(299) 00:14:06.324 fused_ordering(300) 00:14:06.324 fused_ordering(301) 00:14:06.324 fused_ordering(302) 00:14:06.324 fused_ordering(303) 00:14:06.324 fused_ordering(304) 00:14:06.324 fused_ordering(305) 00:14:06.324 fused_ordering(306) 00:14:06.324 fused_ordering(307) 00:14:06.324 fused_ordering(308) 00:14:06.324 fused_ordering(309) 00:14:06.324 fused_ordering(310) 00:14:06.324 fused_ordering(311) 00:14:06.324 fused_ordering(312) 00:14:06.324 fused_ordering(313) 00:14:06.324 fused_ordering(314) 00:14:06.324 fused_ordering(315) 00:14:06.324 fused_ordering(316) 00:14:06.324 fused_ordering(317) 00:14:06.324 fused_ordering(318) 00:14:06.324 fused_ordering(319) 00:14:06.324 fused_ordering(320) 00:14:06.324 fused_ordering(321) 00:14:06.324 fused_ordering(322) 00:14:06.324 fused_ordering(323) 00:14:06.324 fused_ordering(324) 00:14:06.324 fused_ordering(325) 00:14:06.324 fused_ordering(326) 00:14:06.324 fused_ordering(327) 00:14:06.324 fused_ordering(328) 00:14:06.324 fused_ordering(329) 00:14:06.324 fused_ordering(330) 00:14:06.324 fused_ordering(331) 00:14:06.324 fused_ordering(332) 00:14:06.324 fused_ordering(333) 00:14:06.324 fused_ordering(334) 00:14:06.324 fused_ordering(335) 00:14:06.324 fused_ordering(336) 00:14:06.324 fused_ordering(337) 00:14:06.324 fused_ordering(338) 00:14:06.324 fused_ordering(339) 00:14:06.324 fused_ordering(340) 00:14:06.324 fused_ordering(341) 00:14:06.324 fused_ordering(342) 00:14:06.324 fused_ordering(343) 00:14:06.324 fused_ordering(344) 00:14:06.324 fused_ordering(345) 00:14:06.324 fused_ordering(346) 00:14:06.324 fused_ordering(347) 00:14:06.324 fused_ordering(348) 00:14:06.324 fused_ordering(349) 00:14:06.324 fused_ordering(350) 00:14:06.324 fused_ordering(351) 00:14:06.324 fused_ordering(352) 00:14:06.324 fused_ordering(353) 00:14:06.324 fused_ordering(354) 00:14:06.324 fused_ordering(355) 00:14:06.324 fused_ordering(356) 00:14:06.324 fused_ordering(357) 00:14:06.324 fused_ordering(358) 00:14:06.324 fused_ordering(359) 00:14:06.324 fused_ordering(360) 00:14:06.324 fused_ordering(361) 00:14:06.324 fused_ordering(362) 00:14:06.324 fused_ordering(363) 00:14:06.324 fused_ordering(364) 00:14:06.324 fused_ordering(365) 00:14:06.324 fused_ordering(366) 00:14:06.324 fused_ordering(367) 00:14:06.324 fused_ordering(368) 00:14:06.324 fused_ordering(369) 
00:14:06.324 fused_ordering(370) 00:14:06.324 fused_ordering(371) 00:14:06.324 fused_ordering(372) 00:14:06.324 fused_ordering(373) 00:14:06.324 fused_ordering(374) 00:14:06.324 fused_ordering(375) 00:14:06.324 fused_ordering(376) 00:14:06.324 fused_ordering(377) 00:14:06.324 fused_ordering(378) 00:14:06.324 fused_ordering(379) 00:14:06.324 fused_ordering(380) 00:14:06.324 fused_ordering(381) 00:14:06.324 fused_ordering(382) 00:14:06.324 fused_ordering(383) 00:14:06.324 fused_ordering(384) 00:14:06.324 fused_ordering(385) 00:14:06.324 fused_ordering(386) 00:14:06.324 fused_ordering(387) 00:14:06.324 fused_ordering(388) 00:14:06.324 fused_ordering(389) 00:14:06.324 fused_ordering(390) 00:14:06.324 fused_ordering(391) 00:14:06.324 fused_ordering(392) 00:14:06.324 fused_ordering(393) 00:14:06.324 fused_ordering(394) 00:14:06.324 fused_ordering(395) 00:14:06.324 fused_ordering(396) 00:14:06.324 fused_ordering(397) 00:14:06.324 fused_ordering(398) 00:14:06.324 fused_ordering(399) 00:14:06.324 fused_ordering(400) 00:14:06.324 fused_ordering(401) 00:14:06.324 fused_ordering(402) 00:14:06.324 fused_ordering(403) 00:14:06.324 fused_ordering(404) 00:14:06.324 fused_ordering(405) 00:14:06.324 fused_ordering(406) 00:14:06.324 fused_ordering(407) 00:14:06.324 fused_ordering(408) 00:14:06.324 fused_ordering(409) 00:14:06.324 fused_ordering(410) 00:14:07.258 fused_ordering(411) 00:14:07.258 fused_ordering(412) 00:14:07.258 fused_ordering(413) 00:14:07.258 fused_ordering(414) 00:14:07.258 fused_ordering(415) 00:14:07.258 fused_ordering(416) 00:14:07.258 fused_ordering(417) 00:14:07.258 fused_ordering(418) 00:14:07.258 fused_ordering(419) 00:14:07.258 fused_ordering(420) 00:14:07.258 fused_ordering(421) 00:14:07.258 fused_ordering(422) 00:14:07.258 fused_ordering(423) 00:14:07.258 fused_ordering(424) 00:14:07.258 fused_ordering(425) 00:14:07.258 fused_ordering(426) 00:14:07.258 fused_ordering(427) 00:14:07.258 fused_ordering(428) 00:14:07.258 fused_ordering(429) 00:14:07.258 fused_ordering(430) 00:14:07.258 fused_ordering(431) 00:14:07.258 fused_ordering(432) 00:14:07.258 fused_ordering(433) 00:14:07.258 fused_ordering(434) 00:14:07.258 fused_ordering(435) 00:14:07.258 fused_ordering(436) 00:14:07.258 fused_ordering(437) 00:14:07.258 fused_ordering(438) 00:14:07.258 fused_ordering(439) 00:14:07.258 fused_ordering(440) 00:14:07.258 fused_ordering(441) 00:14:07.258 fused_ordering(442) 00:14:07.258 fused_ordering(443) 00:14:07.258 fused_ordering(444) 00:14:07.258 fused_ordering(445) 00:14:07.258 fused_ordering(446) 00:14:07.258 fused_ordering(447) 00:14:07.258 fused_ordering(448) 00:14:07.258 fused_ordering(449) 00:14:07.258 fused_ordering(450) 00:14:07.259 fused_ordering(451) 00:14:07.259 fused_ordering(452) 00:14:07.259 fused_ordering(453) 00:14:07.259 fused_ordering(454) 00:14:07.259 fused_ordering(455) 00:14:07.259 fused_ordering(456) 00:14:07.259 fused_ordering(457) 00:14:07.259 fused_ordering(458) 00:14:07.259 fused_ordering(459) 00:14:07.259 fused_ordering(460) 00:14:07.259 fused_ordering(461) 00:14:07.259 fused_ordering(462) 00:14:07.259 fused_ordering(463) 00:14:07.259 fused_ordering(464) 00:14:07.259 fused_ordering(465) 00:14:07.259 fused_ordering(466) 00:14:07.259 fused_ordering(467) 00:14:07.259 fused_ordering(468) 00:14:07.259 fused_ordering(469) 00:14:07.259 fused_ordering(470) 00:14:07.259 fused_ordering(471) 00:14:07.259 fused_ordering(472) 00:14:07.259 fused_ordering(473) 00:14:07.259 fused_ordering(474) 00:14:07.259 fused_ordering(475) 00:14:07.259 fused_ordering(476) 00:14:07.259 
fused_ordering(477) 00:14:07.259 fused_ordering(478) 00:14:07.259 fused_ordering(479) 00:14:07.259 fused_ordering(480) 00:14:07.259 fused_ordering(481) 00:14:07.259 fused_ordering(482) 00:14:07.259 fused_ordering(483) 00:14:07.259 fused_ordering(484) 00:14:07.259 fused_ordering(485) 00:14:07.259 fused_ordering(486) 00:14:07.259 fused_ordering(487) 00:14:07.259 fused_ordering(488) 00:14:07.259 fused_ordering(489) 00:14:07.259 fused_ordering(490) 00:14:07.259 fused_ordering(491) 00:14:07.259 fused_ordering(492) 00:14:07.259 fused_ordering(493) 00:14:07.259 fused_ordering(494) 00:14:07.259 fused_ordering(495) 00:14:07.259 fused_ordering(496) 00:14:07.259 fused_ordering(497) 00:14:07.259 fused_ordering(498) 00:14:07.259 fused_ordering(499) 00:14:07.259 fused_ordering(500) 00:14:07.259 fused_ordering(501) 00:14:07.259 fused_ordering(502) 00:14:07.259 fused_ordering(503) 00:14:07.259 fused_ordering(504) 00:14:07.259 fused_ordering(505) 00:14:07.259 fused_ordering(506) 00:14:07.259 fused_ordering(507) 00:14:07.259 fused_ordering(508) 00:14:07.259 fused_ordering(509) 00:14:07.259 fused_ordering(510) 00:14:07.259 fused_ordering(511) 00:14:07.259 fused_ordering(512) 00:14:07.259 fused_ordering(513) 00:14:07.259 fused_ordering(514) 00:14:07.259 fused_ordering(515) 00:14:07.259 fused_ordering(516) 00:14:07.259 fused_ordering(517) 00:14:07.259 fused_ordering(518) 00:14:07.259 fused_ordering(519) 00:14:07.259 fused_ordering(520) 00:14:07.259 fused_ordering(521) 00:14:07.259 fused_ordering(522) 00:14:07.259 fused_ordering(523) 00:14:07.259 fused_ordering(524) 00:14:07.259 fused_ordering(525) 00:14:07.259 fused_ordering(526) 00:14:07.259 fused_ordering(527) 00:14:07.259 fused_ordering(528) 00:14:07.259 fused_ordering(529) 00:14:07.259 fused_ordering(530) 00:14:07.259 fused_ordering(531) 00:14:07.259 fused_ordering(532) 00:14:07.259 fused_ordering(533) 00:14:07.259 fused_ordering(534) 00:14:07.259 fused_ordering(535) 00:14:07.259 fused_ordering(536) 00:14:07.259 fused_ordering(537) 00:14:07.259 fused_ordering(538) 00:14:07.259 fused_ordering(539) 00:14:07.259 fused_ordering(540) 00:14:07.259 fused_ordering(541) 00:14:07.259 fused_ordering(542) 00:14:07.259 fused_ordering(543) 00:14:07.259 fused_ordering(544) 00:14:07.259 fused_ordering(545) 00:14:07.259 fused_ordering(546) 00:14:07.259 fused_ordering(547) 00:14:07.259 fused_ordering(548) 00:14:07.259 fused_ordering(549) 00:14:07.259 fused_ordering(550) 00:14:07.259 fused_ordering(551) 00:14:07.259 fused_ordering(552) 00:14:07.259 fused_ordering(553) 00:14:07.259 fused_ordering(554) 00:14:07.259 fused_ordering(555) 00:14:07.259 fused_ordering(556) 00:14:07.259 fused_ordering(557) 00:14:07.259 fused_ordering(558) 00:14:07.259 fused_ordering(559) 00:14:07.259 fused_ordering(560) 00:14:07.259 fused_ordering(561) 00:14:07.259 fused_ordering(562) 00:14:07.259 fused_ordering(563) 00:14:07.259 fused_ordering(564) 00:14:07.259 fused_ordering(565) 00:14:07.259 fused_ordering(566) 00:14:07.259 fused_ordering(567) 00:14:07.259 fused_ordering(568) 00:14:07.259 fused_ordering(569) 00:14:07.259 fused_ordering(570) 00:14:07.259 fused_ordering(571) 00:14:07.259 fused_ordering(572) 00:14:07.259 fused_ordering(573) 00:14:07.259 fused_ordering(574) 00:14:07.259 fused_ordering(575) 00:14:07.259 fused_ordering(576) 00:14:07.259 fused_ordering(577) 00:14:07.259 fused_ordering(578) 00:14:07.259 fused_ordering(579) 00:14:07.259 fused_ordering(580) 00:14:07.259 fused_ordering(581) 00:14:07.259 fused_ordering(582) 00:14:07.259 fused_ordering(583) 00:14:07.259 fused_ordering(584) 
00:14:07.259 fused_ordering(585) 00:14:07.259 fused_ordering(586) 00:14:07.259 fused_ordering(587) 00:14:07.259 fused_ordering(588) 00:14:07.259 fused_ordering(589) 00:14:07.259 fused_ordering(590) 00:14:07.259 fused_ordering(591) 00:14:07.259 fused_ordering(592) 00:14:07.259 fused_ordering(593) 00:14:07.259 fused_ordering(594) 00:14:07.259 fused_ordering(595) 00:14:07.259 fused_ordering(596) 00:14:07.259 fused_ordering(597) 00:14:07.259 fused_ordering(598) 00:14:07.259 fused_ordering(599) 00:14:07.259 fused_ordering(600) 00:14:07.259 fused_ordering(601) 00:14:07.259 fused_ordering(602) 00:14:07.259 fused_ordering(603) 00:14:07.259 fused_ordering(604) 00:14:07.259 fused_ordering(605) 00:14:07.259 fused_ordering(606) 00:14:07.259 fused_ordering(607) 00:14:07.259 fused_ordering(608) 00:14:07.259 fused_ordering(609) 00:14:07.259 fused_ordering(610) 00:14:07.259 fused_ordering(611) 00:14:07.259 fused_ordering(612) 00:14:07.259 fused_ordering(613) 00:14:07.259 fused_ordering(614) 00:14:07.259 fused_ordering(615) 00:14:07.826 fused_ordering(616) 00:14:07.826 fused_ordering(617) 00:14:07.826 fused_ordering(618) 00:14:07.826 fused_ordering(619) 00:14:07.826 fused_ordering(620) 00:14:07.826 fused_ordering(621) 00:14:07.826 fused_ordering(622) 00:14:07.826 fused_ordering(623) 00:14:07.826 fused_ordering(624) 00:14:07.826 fused_ordering(625) 00:14:07.826 fused_ordering(626) 00:14:07.826 fused_ordering(627) 00:14:07.826 fused_ordering(628) 00:14:07.826 fused_ordering(629) 00:14:07.826 fused_ordering(630) 00:14:07.826 fused_ordering(631) 00:14:07.826 fused_ordering(632) 00:14:07.826 fused_ordering(633) 00:14:07.826 fused_ordering(634) 00:14:07.826 fused_ordering(635) 00:14:07.826 fused_ordering(636) 00:14:07.826 fused_ordering(637) 00:14:07.826 fused_ordering(638) 00:14:07.826 fused_ordering(639) 00:14:07.826 fused_ordering(640) 00:14:07.826 fused_ordering(641) 00:14:07.826 fused_ordering(642) 00:14:07.826 fused_ordering(643) 00:14:07.826 fused_ordering(644) 00:14:07.826 fused_ordering(645) 00:14:07.826 fused_ordering(646) 00:14:07.826 fused_ordering(647) 00:14:07.826 fused_ordering(648) 00:14:07.826 fused_ordering(649) 00:14:07.826 fused_ordering(650) 00:14:07.827 fused_ordering(651) 00:14:07.827 fused_ordering(652) 00:14:07.827 fused_ordering(653) 00:14:07.827 fused_ordering(654) 00:14:07.827 fused_ordering(655) 00:14:07.827 fused_ordering(656) 00:14:07.827 fused_ordering(657) 00:14:07.827 fused_ordering(658) 00:14:07.827 fused_ordering(659) 00:14:07.827 fused_ordering(660) 00:14:07.827 fused_ordering(661) 00:14:07.827 fused_ordering(662) 00:14:07.827 fused_ordering(663) 00:14:07.827 fused_ordering(664) 00:14:07.827 fused_ordering(665) 00:14:07.827 fused_ordering(666) 00:14:07.827 fused_ordering(667) 00:14:07.827 fused_ordering(668) 00:14:07.827 fused_ordering(669) 00:14:07.827 fused_ordering(670) 00:14:07.827 fused_ordering(671) 00:14:07.827 fused_ordering(672) 00:14:07.827 fused_ordering(673) 00:14:07.827 fused_ordering(674) 00:14:07.827 fused_ordering(675) 00:14:07.827 fused_ordering(676) 00:14:07.827 fused_ordering(677) 00:14:07.827 fused_ordering(678) 00:14:07.827 fused_ordering(679) 00:14:07.827 fused_ordering(680) 00:14:07.827 fused_ordering(681) 00:14:07.827 fused_ordering(682) 00:14:07.827 fused_ordering(683) 00:14:07.827 fused_ordering(684) 00:14:07.827 fused_ordering(685) 00:14:07.827 fused_ordering(686) 00:14:07.827 fused_ordering(687) 00:14:07.827 fused_ordering(688) 00:14:07.827 fused_ordering(689) 00:14:07.827 fused_ordering(690) 00:14:07.827 fused_ordering(691) 00:14:07.827 
fused_ordering(692) 00:14:07.827 fused_ordering(693) 00:14:07.827 fused_ordering(694) 00:14:07.827 fused_ordering(695) 00:14:07.827 fused_ordering(696) 00:14:07.827 fused_ordering(697) 00:14:07.827 fused_ordering(698) 00:14:07.827 fused_ordering(699) 00:14:07.827 fused_ordering(700) 00:14:07.827 fused_ordering(701) 00:14:07.827 fused_ordering(702) 00:14:07.827 fused_ordering(703) 00:14:07.827 fused_ordering(704) 00:14:07.827 fused_ordering(705) 00:14:07.827 fused_ordering(706) 00:14:07.827 fused_ordering(707) 00:14:07.827 fused_ordering(708) 00:14:07.827 fused_ordering(709) 00:14:07.827 fused_ordering(710) 00:14:07.827 fused_ordering(711) 00:14:07.827 fused_ordering(712) 00:14:07.827 fused_ordering(713) 00:14:07.827 fused_ordering(714) 00:14:07.827 fused_ordering(715) 00:14:07.827 fused_ordering(716) 00:14:07.827 fused_ordering(717) 00:14:07.827 fused_ordering(718) 00:14:07.827 fused_ordering(719) 00:14:07.827 fused_ordering(720) 00:14:07.827 fused_ordering(721) 00:14:07.827 fused_ordering(722) 00:14:07.827 fused_ordering(723) 00:14:07.827 fused_ordering(724) 00:14:07.827 fused_ordering(725) 00:14:07.827 fused_ordering(726) 00:14:07.827 fused_ordering(727) 00:14:07.827 fused_ordering(728) 00:14:07.827 fused_ordering(729) 00:14:07.827 fused_ordering(730) 00:14:07.827 fused_ordering(731) 00:14:07.827 fused_ordering(732) 00:14:07.827 fused_ordering(733) 00:14:07.827 fused_ordering(734) 00:14:07.827 fused_ordering(735) 00:14:07.827 fused_ordering(736) 00:14:07.827 fused_ordering(737) 00:14:07.827 fused_ordering(738) 00:14:07.827 fused_ordering(739) 00:14:07.827 fused_ordering(740) 00:14:07.827 fused_ordering(741) 00:14:07.827 fused_ordering(742) 00:14:07.827 fused_ordering(743) 00:14:07.827 fused_ordering(744) 00:14:07.827 fused_ordering(745) 00:14:07.827 fused_ordering(746) 00:14:07.827 fused_ordering(747) 00:14:07.827 fused_ordering(748) 00:14:07.827 fused_ordering(749) 00:14:07.827 fused_ordering(750) 00:14:07.827 fused_ordering(751) 00:14:07.827 fused_ordering(752) 00:14:07.827 fused_ordering(753) 00:14:07.827 fused_ordering(754) 00:14:07.827 fused_ordering(755) 00:14:07.827 fused_ordering(756) 00:14:07.827 fused_ordering(757) 00:14:07.827 fused_ordering(758) 00:14:07.827 fused_ordering(759) 00:14:07.827 fused_ordering(760) 00:14:07.827 fused_ordering(761) 00:14:07.827 fused_ordering(762) 00:14:07.827 fused_ordering(763) 00:14:07.827 fused_ordering(764) 00:14:07.827 fused_ordering(765) 00:14:07.827 fused_ordering(766) 00:14:07.827 fused_ordering(767) 00:14:07.827 fused_ordering(768) 00:14:07.827 fused_ordering(769) 00:14:07.827 fused_ordering(770) 00:14:07.827 fused_ordering(771) 00:14:07.827 fused_ordering(772) 00:14:07.827 fused_ordering(773) 00:14:07.827 fused_ordering(774) 00:14:07.827 fused_ordering(775) 00:14:07.827 fused_ordering(776) 00:14:07.827 fused_ordering(777) 00:14:07.827 fused_ordering(778) 00:14:07.827 fused_ordering(779) 00:14:07.827 fused_ordering(780) 00:14:07.827 fused_ordering(781) 00:14:07.827 fused_ordering(782) 00:14:07.827 fused_ordering(783) 00:14:07.827 fused_ordering(784) 00:14:07.827 fused_ordering(785) 00:14:07.827 fused_ordering(786) 00:14:07.827 fused_ordering(787) 00:14:07.827 fused_ordering(788) 00:14:07.827 fused_ordering(789) 00:14:07.827 fused_ordering(790) 00:14:07.827 fused_ordering(791) 00:14:07.827 fused_ordering(792) 00:14:07.827 fused_ordering(793) 00:14:07.827 fused_ordering(794) 00:14:07.827 fused_ordering(795) 00:14:07.827 fused_ordering(796) 00:14:07.827 fused_ordering(797) 00:14:07.827 fused_ordering(798) 00:14:07.827 fused_ordering(799) 
00:14:07.827 fused_ordering(800) 00:14:07.827 fused_ordering(801) 00:14:07.827 fused_ordering(802) 00:14:07.827 fused_ordering(803) 00:14:07.827 fused_ordering(804) 00:14:07.827 fused_ordering(805) 00:14:07.827 fused_ordering(806) 00:14:07.827 fused_ordering(807) 00:14:07.827 fused_ordering(808) 00:14:07.827 fused_ordering(809) 00:14:07.827 fused_ordering(810) 00:14:07.827 fused_ordering(811) 00:14:07.827 fused_ordering(812) 00:14:07.827 fused_ordering(813) 00:14:07.827 fused_ordering(814) 00:14:07.827 fused_ordering(815) 00:14:07.827 fused_ordering(816) 00:14:07.827 fused_ordering(817) 00:14:07.827 fused_ordering(818) 00:14:07.827 fused_ordering(819) 00:14:07.827 fused_ordering(820) 00:14:08.763 fused_ordering(821) 00:14:08.763 fused_ordering(822) 00:14:08.763 fused_ordering(823) 00:14:08.763 fused_ordering(824) 00:14:08.763 fused_ordering(825) 00:14:08.763 fused_ordering(826) 00:14:08.763 fused_ordering(827) 00:14:08.763 fused_ordering(828) 00:14:08.763 fused_ordering(829) 00:14:08.764 fused_ordering(830) 00:14:08.764 fused_ordering(831) 00:14:08.764 fused_ordering(832) 00:14:08.764 fused_ordering(833) 00:14:08.764 fused_ordering(834) 00:14:08.764 fused_ordering(835) 00:14:08.764 fused_ordering(836) 00:14:08.764 fused_ordering(837) 00:14:08.764 fused_ordering(838) 00:14:08.764 fused_ordering(839) 00:14:08.764 fused_ordering(840) 00:14:08.764 fused_ordering(841) 00:14:08.764 fused_ordering(842) 00:14:08.764 fused_ordering(843) 00:14:08.764 fused_ordering(844) 00:14:08.764 fused_ordering(845) 00:14:08.764 fused_ordering(846) 00:14:08.764 fused_ordering(847) 00:14:08.764 fused_ordering(848) 00:14:08.764 fused_ordering(849) 00:14:08.764 fused_ordering(850) 00:14:08.764 fused_ordering(851) 00:14:08.764 fused_ordering(852) 00:14:08.764 fused_ordering(853) 00:14:08.764 fused_ordering(854) 00:14:08.764 fused_ordering(855) 00:14:08.764 fused_ordering(856) 00:14:08.764 fused_ordering(857) 00:14:08.764 fused_ordering(858) 00:14:08.764 fused_ordering(859) 00:14:08.764 fused_ordering(860) 00:14:08.764 fused_ordering(861) 00:14:08.764 fused_ordering(862) 00:14:08.764 fused_ordering(863) 00:14:08.764 fused_ordering(864) 00:14:08.764 fused_ordering(865) 00:14:08.764 fused_ordering(866) 00:14:08.764 fused_ordering(867) 00:14:08.764 fused_ordering(868) 00:14:08.764 fused_ordering(869) 00:14:08.764 fused_ordering(870) 00:14:08.764 fused_ordering(871) 00:14:08.764 fused_ordering(872) 00:14:08.764 fused_ordering(873) 00:14:08.764 fused_ordering(874) 00:14:08.764 fused_ordering(875) 00:14:08.764 fused_ordering(876) 00:14:08.764 fused_ordering(877) 00:14:08.764 fused_ordering(878) 00:14:08.764 fused_ordering(879) 00:14:08.764 fused_ordering(880) 00:14:08.764 fused_ordering(881) 00:14:08.764 fused_ordering(882) 00:14:08.764 fused_ordering(883) 00:14:08.764 fused_ordering(884) 00:14:08.764 fused_ordering(885) 00:14:08.764 fused_ordering(886) 00:14:08.764 fused_ordering(887) 00:14:08.764 fused_ordering(888) 00:14:08.764 fused_ordering(889) 00:14:08.764 fused_ordering(890) 00:14:08.764 fused_ordering(891) 00:14:08.764 fused_ordering(892) 00:14:08.764 fused_ordering(893) 00:14:08.764 fused_ordering(894) 00:14:08.764 fused_ordering(895) 00:14:08.764 fused_ordering(896) 00:14:08.764 fused_ordering(897) 00:14:08.764 fused_ordering(898) 00:14:08.764 fused_ordering(899) 00:14:08.764 fused_ordering(900) 00:14:08.764 fused_ordering(901) 00:14:08.764 fused_ordering(902) 00:14:08.764 fused_ordering(903) 00:14:08.764 fused_ordering(904) 00:14:08.764 fused_ordering(905) 00:14:08.764 fused_ordering(906) 00:14:08.764 
fused_ordering(907) 00:14:08.764 fused_ordering(908) 00:14:08.764 fused_ordering(909) 00:14:08.764 fused_ordering(910) 00:14:08.764 fused_ordering(911) 00:14:08.764 fused_ordering(912) 00:14:08.764 fused_ordering(913) 00:14:08.764 fused_ordering(914) 00:14:08.764 fused_ordering(915) 00:14:08.764 fused_ordering(916) 00:14:08.764 fused_ordering(917) 00:14:08.764 fused_ordering(918) 00:14:08.764 fused_ordering(919) 00:14:08.764 fused_ordering(920) 00:14:08.764 fused_ordering(921) 00:14:08.764 fused_ordering(922) 00:14:08.764 fused_ordering(923) 00:14:08.764 fused_ordering(924) 00:14:08.764 fused_ordering(925) 00:14:08.764 fused_ordering(926) 00:14:08.764 fused_ordering(927) 00:14:08.764 fused_ordering(928) 00:14:08.764 fused_ordering(929) 00:14:08.764 fused_ordering(930) 00:14:08.764 fused_ordering(931) 00:14:08.764 fused_ordering(932) 00:14:08.764 fused_ordering(933) 00:14:08.764 fused_ordering(934) 00:14:08.764 fused_ordering(935) 00:14:08.764 fused_ordering(936) 00:14:08.764 fused_ordering(937) 00:14:08.764 fused_ordering(938) 00:14:08.764 fused_ordering(939) 00:14:08.764 fused_ordering(940) 00:14:08.764 fused_ordering(941) 00:14:08.764 fused_ordering(942) 00:14:08.764 fused_ordering(943) 00:14:08.764 fused_ordering(944) 00:14:08.764 fused_ordering(945) 00:14:08.764 fused_ordering(946) 00:14:08.764 fused_ordering(947) 00:14:08.764 fused_ordering(948) 00:14:08.764 fused_ordering(949) 00:14:08.764 fused_ordering(950) 00:14:08.764 fused_ordering(951) 00:14:08.764 fused_ordering(952) 00:14:08.764 fused_ordering(953) 00:14:08.764 fused_ordering(954) 00:14:08.764 fused_ordering(955) 00:14:08.764 fused_ordering(956) 00:14:08.764 fused_ordering(957) 00:14:08.764 fused_ordering(958) 00:14:08.764 fused_ordering(959) 00:14:08.764 fused_ordering(960) 00:14:08.764 fused_ordering(961) 00:14:08.764 fused_ordering(962) 00:14:08.764 fused_ordering(963) 00:14:08.764 fused_ordering(964) 00:14:08.764 fused_ordering(965) 00:14:08.764 fused_ordering(966) 00:14:08.764 fused_ordering(967) 00:14:08.764 fused_ordering(968) 00:14:08.764 fused_ordering(969) 00:14:08.764 fused_ordering(970) 00:14:08.764 fused_ordering(971) 00:14:08.764 fused_ordering(972) 00:14:08.764 fused_ordering(973) 00:14:08.764 fused_ordering(974) 00:14:08.764 fused_ordering(975) 00:14:08.764 fused_ordering(976) 00:14:08.764 fused_ordering(977) 00:14:08.764 fused_ordering(978) 00:14:08.764 fused_ordering(979) 00:14:08.764 fused_ordering(980) 00:14:08.764 fused_ordering(981) 00:14:08.764 fused_ordering(982) 00:14:08.764 fused_ordering(983) 00:14:08.764 fused_ordering(984) 00:14:08.764 fused_ordering(985) 00:14:08.764 fused_ordering(986) 00:14:08.764 fused_ordering(987) 00:14:08.764 fused_ordering(988) 00:14:08.764 fused_ordering(989) 00:14:08.764 fused_ordering(990) 00:14:08.764 fused_ordering(991) 00:14:08.764 fused_ordering(992) 00:14:08.764 fused_ordering(993) 00:14:08.764 fused_ordering(994) 00:14:08.764 fused_ordering(995) 00:14:08.764 fused_ordering(996) 00:14:08.764 fused_ordering(997) 00:14:08.764 fused_ordering(998) 00:14:08.764 fused_ordering(999) 00:14:08.764 fused_ordering(1000) 00:14:08.764 fused_ordering(1001) 00:14:08.764 fused_ordering(1002) 00:14:08.764 fused_ordering(1003) 00:14:08.764 fused_ordering(1004) 00:14:08.764 fused_ordering(1005) 00:14:08.764 fused_ordering(1006) 00:14:08.764 fused_ordering(1007) 00:14:08.764 fused_ordering(1008) 00:14:08.764 fused_ordering(1009) 00:14:08.764 fused_ordering(1010) 00:14:08.764 fused_ordering(1011) 00:14:08.764 fused_ordering(1012) 00:14:08.764 fused_ordering(1013) 00:14:08.764 
fused_ordering(1014) 00:14:08.764 fused_ordering(1015) 00:14:08.764 fused_ordering(1016) 00:14:08.764 fused_ordering(1017) 00:14:08.764 fused_ordering(1018) 00:14:08.764 fused_ordering(1019) 00:14:08.764 fused_ordering(1020) 00:14:08.764 fused_ordering(1021) 00:14:08.764 fused_ordering(1022) 00:14:08.764 fused_ordering(1023) 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:08.764 rmmod nvme_tcp 00:14:08.764 rmmod nvme_fabrics 00:14:08.764 rmmod nvme_keyring 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3186193 ']' 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3186193 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 3186193 ']' 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 3186193 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3186193 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3186193' 00:14:08.764 killing process with pid 3186193 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 3186193 00:14:08.764 05:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 3186193 00:14:09.022 05:28:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:09.022 05:28:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:09.022 05:28:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:09.022 05:28:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.022 05:28:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:09.022 05:28:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.022 05:28:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
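
With the run complete and teardown under way, the configuration that produced the output above (target/fused_ordering.sh steps @15 through @22) condenses to the short sequence below. It is a sketch rather than a substitute for the script: it assumes nvmf/common.sh is sourced so that rpc_cmd reaches the nvmf_tgt started earlier over /var/tmp/spdk.sock, and that the fused_ordering helper has been built under test/nvme/fused_ordering ($rootdir stands in for the SPDK checkout path used in this job).

# TCP transport; -o and -u 8192 reproduce NVMF_TRANSPORT_OPTS and the options recorded above.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192

# Subsystem cnode1: any host allowed (-a), fixed serial (-s), at most 10 namespaces (-m).
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Back the subsystem with a 1000 MiB, 512-byte-block null bdev (the "1GB" namespace above).
rpc_cmd bdev_null_create NULL1 1000 512
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Drive fused command ordering against the listener; this is what emits the
# fused_ordering(N) progress lines seen above.
"$rootdir"/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

# Cleanup mirrors the trace that follows: unload nvme-tcp/nvme-fabrics and stop nvmf_tgt.
nvmftestfini
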
00:14:09.022 05:28:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.582 05:28:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:11.582 00:14:11.582 real 0m9.169s 00:14:11.582 user 0m7.376s 00:14:11.582 sys 0m4.135s 00:14:11.582 05:28:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:11.582 05:28:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:11.582 ************************************ 00:14:11.582 END TEST nvmf_fused_ordering 00:14:11.582 ************************************ 00:14:11.582 05:28:18 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:11.582 05:28:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:11.582 05:28:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:11.582 05:28:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:11.582 ************************************ 00:14:11.582 START TEST nvmf_delete_subsystem 00:14:11.582 ************************************ 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:11.582 * Looking for test storage... 00:14:11.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.582 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:11.583 05:28:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:13.479 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:13.479 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:13.479 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:13.479 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:13.479 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:13.479 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:13.479 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:13.479 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:13.479 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:13.480 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:13.480 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:13.480 05:28:20 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:13.480 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:13.480 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:13.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:13.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:14:13.480 00:14:13.480 --- 10.0.0.2 ping statistics --- 00:14:13.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.480 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:14:13.480 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:13.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:13.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:14:13.480 00:14:13.480 --- 10.0.0.1 ping statistics --- 00:14:13.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.481 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3188671 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3188671 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 3188671 ']' 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:13.481 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:13.481 [2024-07-14 05:28:20.360026] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:13.481 [2024-07-14 05:28:20.360099] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.481 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.481 [2024-07-14 05:28:20.424344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:13.481 [2024-07-14 05:28:20.508143] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:13.481 [2024-07-14 05:28:20.508209] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.481 [2024-07-14 05:28:20.508222] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.481 [2024-07-14 05:28:20.508233] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.481 [2024-07-14 05:28:20.508251] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.481 [2024-07-14 05:28:20.508336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.481 [2024-07-14 05:28:20.508341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:13.739 [2024-07-14 05:28:20.645116] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:13.739 [2024-07-14 05:28:20.661370] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:13.739 NULL1 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:13.739 Delay0 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3188807 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:13.739 05:28:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:13.739 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.739 [2024-07-14 05:28:20.746140] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
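The xtrace above amounts to a short target-side setup followed by an initiator-side load. The sketch below is a condensed, hand-written replay of the traced commands, not the script itself: $SPDK_DIR stands in for the workspace checkout, rpc_cmd is assumed to forward to scripts/rpc.py on the default /var/tmp/spdk.sock, and the interface split shown earlier is already in place (cvl_0_0 at 10.0.0.2 inside the cvl_0_0_ns_spdk namespace, cvl_0_1 at 10.0.0.1 in the root namespace).

    #!/usr/bin/env bash
    # Start the target inside the test namespace on cores 0-1 (-m 0x3), as nvmfappstart did above
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    sleep 2   # the harness uses waitforlisten on the RPC socket; a short sleep keeps this sketch simple

    rpc="$SPDK_DIR/scripts/rpc.py"   # what rpc_cmd wraps in these tests (assumption)
    $rpc nvmf_create_transport -t tcp -o -u 8192                                   # delete_subsystem.sh@15
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                                           # null bdev, 512-byte blocks, sizes as passed in the trace
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Initiator-side load, exactly as perf_pid=3188807 was launched above
    "$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &

The nvmf_delete_subsystem call that follows (delete_subsystem.sh@32) is issued while this perf workload is still in flight, which is why the next stretch of log is a burst of "completed with error" lines: the queued I/O is failed back as the subsystem is torn down underneath it.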
00:14:15.665 05:28:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.665 05:28:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.665 05:28:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 [2024-07-14 05:28:22.836730] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd568000c00 is same with the state(5) to be set 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 
00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error 
(sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 starting I/O failed: -6 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 [2024-07-14 05:28:22.837566] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cccb00 is same with the state(5) to be set 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 
Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.924 Read completed with error (sct=0, sc=8) 00:14:15.924 Write completed with error (sct=0, sc=8) 00:14:15.925 Write completed with error (sct=0, sc=8) 00:14:15.925 Write completed with error (sct=0, sc=8) 00:14:16.858 [2024-07-14 05:28:23.804313] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9620 is same with the state(5) to be set 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 [2024-07-14 05:28:23.839416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd56800bfe0 is same with the state(5) to be set 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Read 
completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 [2024-07-14 05:28:23.839587] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd56800c780 is same with the state(5) to be set 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 [2024-07-14 05:28:23.839894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd1d40 is same with the state(5) to be set 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Write completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 Read completed with error (sct=0, sc=8) 00:14:16.858 [2024-07-14 05:28:23.840726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cccce0 is same with the state(5) to be set 00:14:16.858 Initializing NVMe Controllers 00:14:16.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:16.858 Controller IO queue size 128, less than required. 00:14:16.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:16.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:16.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:16.859 Initialization complete. Launching workers. 
00:14:16.859 ======================================================== 00:14:16.859 Latency(us) 00:14:16.859 Device Information : IOPS MiB/s Average min max 00:14:16.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.29 0.08 914285.30 569.31 1013134.68 00:14:16.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.77 0.08 906255.71 785.63 1011723.67 00:14:16.859 ======================================================== 00:14:16.859 Total : 327.06 0.16 910240.05 569.31 1013134.68 00:14:16.859 00:14:16.859 [2024-07-14 05:28:23.841238] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9620 (9): Bad file descriptor 00:14:16.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:16.859 05:28:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.859 05:28:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:16.859 05:28:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3188807 00:14:16.859 05:28:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3188807 00:14:17.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3188807) - No such process 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3188807 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3188807 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3188807 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:17.424 [2024-07-14 05:28:24.365072] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3189212 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3189212 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:17.424 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:17.424 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.424 [2024-07-14 05:28:24.428174] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
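What follows in the trace is the script's bounded wait: it polls the second perf process (perf_pid=3189212 here) with kill -0, sleeping half a second per attempt, and bails out after roughly twenty tries. A minimal equivalent of that pattern, written as a plain while loop rather than the script's exact control flow, and with the give-up branch being part of the sketch only:

    # Poll until the perf process exits, capped at about 10 s (20 tries x 0.5 s)
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 20 )); then
            echo "perf $perf_pid did not exit in time" >&2
            exit 1
        fi
        sleep 0.5
    done

Here $perf_pid is whatever the background launch recorded (the trace shows perf_pid=3189212 being set at delete_subsystem.sh@54); once kill -0 reports "No such process", the loop ends and the test moves on to its final wait and cleanup.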
00:14:17.989 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:17.989 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3189212 00:14:17.989 05:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:18.554 05:28:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:18.554 05:28:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3189212 00:14:18.554 05:28:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:18.810 05:28:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:18.810 05:28:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3189212 00:14:18.810 05:28:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:19.372 05:28:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:19.372 05:28:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3189212 00:14:19.372 05:28:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:19.935 05:28:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:19.935 05:28:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3189212 00:14:19.935 05:28:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:20.499 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:20.499 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3189212 00:14:20.499 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:20.499 Initializing NVMe Controllers 00:14:20.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:20.499 Controller IO queue size 128, less than required. 00:14:20.499 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:20.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:20.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:20.499 Initialization complete. Launching workers. 
00:14:20.499 ======================================================== 00:14:20.499 Latency(us) 00:14:20.499 Device Information : IOPS MiB/s Average min max 00:14:20.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003348.27 1000247.00 1041308.06 00:14:20.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005170.53 1000446.61 1042255.34 00:14:20.499 ======================================================== 00:14:20.499 Total : 256.00 0.12 1004259.40 1000247.00 1042255.34 00:14:20.499 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3189212 00:14:21.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3189212) - No such process 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3189212 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:21.071 rmmod nvme_tcp 00:14:21.071 rmmod nvme_fabrics 00:14:21.071 rmmod nvme_keyring 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3188671 ']' 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3188671 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 3188671 ']' 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 3188671 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3188671 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3188671' 00:14:21.071 killing process with pid 3188671 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 3188671 00:14:21.071 05:28:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 
3188671 00:14:21.330 05:28:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:21.330 05:28:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:21.330 05:28:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:21.330 05:28:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.330 05:28:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:21.330 05:28:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.330 05:28:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.330 05:28:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.230 05:28:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:23.230 00:14:23.230 real 0m12.086s 00:14:23.230 user 0m27.489s 00:14:23.230 sys 0m2.856s 00:14:23.230 05:28:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:23.230 05:28:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.230 ************************************ 00:14:23.230 END TEST nvmf_delete_subsystem 00:14:23.230 ************************************ 00:14:23.230 05:28:30 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:23.230 05:28:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:23.230 05:28:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:23.230 05:28:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:23.230 ************************************ 00:14:23.230 START TEST nvmf_ns_masking 00:14:23.230 ************************************ 00:14:23.230 05:28:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:23.489 * Looking for test storage... 
00:14:23.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.489 05:28:30 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=0dc7bcce-89e7-476c-ba85-42ed06fcc786 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:23.490 05:28:30 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:23.490 05:28:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.391 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:25.392 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:25.392 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:25.392 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:25.392 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:25.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:25.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:14:25.392 00:14:25.392 --- 10.0.0.2 ping statistics --- 00:14:25.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.392 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:25.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:25.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:14:25.392 00:14:25.392 --- 10.0.0.1 ping statistics --- 00:14:25.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.392 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3191567 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3191567 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 3191567 ']' 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:25.392 05:28:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:25.392 [2024-07-14 05:28:32.473413] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
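The per-namespace visibility checks that follow all rely on the same two nvme-cli probes (condensed from the ns_is_visible helper in target/ns_masking.sh as it appears in this trace):

  nvme list-ns /dev/nvme0 | grep 0x2
  nvme id-ns /dev/nvme0 -n 0x2 -o json | jq -r .nguid
  # an all-zero nguid means the namespace is masked from this host;
  # a non-zero nguid (e.g. 36800dce9fc3494c86c8257341f8d015) means it is visible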
00:14:25.392 [2024-07-14 05:28:32.473503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.651 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.651 [2024-07-14 05:28:32.544785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:25.651 [2024-07-14 05:28:32.637429] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.651 [2024-07-14 05:28:32.637491] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.651 [2024-07-14 05:28:32.637509] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.651 [2024-07-14 05:28:32.637522] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.651 [2024-07-14 05:28:32.637534] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.651 [2024-07-14 05:28:32.637596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.651 [2024-07-14 05:28:32.637652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.651 [2024-07-14 05:28:32.637694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.651 [2024-07-14 05:28:32.637696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.909 05:28:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:25.909 05:28:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:25.909 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:25.909 05:28:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:25.909 05:28:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:25.909 05:28:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.909 05:28:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:25.909 [2024-07-14 05:28:33.009422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.166 05:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:26.166 05:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:26.166 05:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:26.423 Malloc1 00:14:26.423 05:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:26.680 Malloc2 00:14:26.680 05:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:26.937 05:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:27.193 05:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.451 [2024-07-14 05:28:34.312740] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.451 05:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:27.451 05:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0dc7bcce-89e7-476c-ba85-42ed06fcc786 -a 10.0.0.2 -s 4420 -i 4 00:14:27.451 05:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:27.451 05:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:27.451 05:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:27.451 05:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:27.451 05:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:29.980 [ 0]:0x1 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=74550a7701524ae69b24d709d59b0565 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 74550a7701524ae69b24d709d59b0565 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
00:14:29.980 [ 0]:0x1 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=74550a7701524ae69b24d709d59b0565 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 74550a7701524ae69b24d709d59b0565 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:29.980 [ 1]:0x2 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=36800dce9fc3494c86c8257341f8d015 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 36800dce9fc3494c86c8257341f8d015 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:29.980 05:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.980 05:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.238 05:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:30.497 05:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:30.497 05:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0dc7bcce-89e7-476c-ba85-42ed06fcc786 -a 10.0.0.2 -s 4420 -i 4 00:14:30.754 05:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:30.754 05:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:30.754 05:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:30.754 05:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:30.754 05:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:30.754 05:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:32.653 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:32.954 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:32.954 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.954 05:28:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:32.954 05:28:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:32.954 05:28:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:32.954 05:28:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:32.954 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:32.954 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:32.954 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:32.954 [ 0]:0x2 00:14:32.954 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:32.954 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:32.954 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=36800dce9fc3494c86c8257341f8d015 00:14:32.954 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 36800dce9fc3494c86c8257341f8d015 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.954 05:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:14:33.212 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:33.212 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:33.212 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:33.212 [ 0]:0x1 00:14:33.212 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:33.212 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:33.212 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=74550a7701524ae69b24d709d59b0565 00:14:33.212 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 74550a7701524ae69b24d709d59b0565 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:33.212 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:33.212 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:33.212 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:33.212 [ 1]:0x2 00:14:33.212 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:33.212 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:33.212 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=36800dce9fc3494c86c8257341f8d015 00:14:33.212 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 36800dce9fc3494c86c8257341f8d015 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:33.213 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:33.470 
05:28:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:33.470 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:33.470 [ 0]:0x2 00:14:33.728 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:33.728 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:33.728 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=36800dce9fc3494c86c8257341f8d015 00:14:33.728 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 36800dce9fc3494c86c8257341f8d015 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:33.728 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:33.728 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:33.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.728 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:33.985 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:33.985 05:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0dc7bcce-89e7-476c-ba85-42ed06fcc786 -a 10.0.0.2 -s 4420 -i 4 00:14:34.242 05:28:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:34.242 05:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:34.242 05:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:34.242 05:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:34.242 05:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:34.242 05:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:36.138 [ 0]:0x1 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=74550a7701524ae69b24d709d59b0565 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 74550a7701524ae69b24d709d59b0565 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:36.138 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:36.395 [ 1]:0x2 00:14:36.395 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:36.395 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:36.395 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=36800dce9fc3494c86c8257341f8d015 00:14:36.395 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 36800dce9fc3494c86c8257341f8d015 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:36.395 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:36.652 [ 0]:0x2 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=36800dce9fc3494c86c8257341f8d015 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 36800dce9fc3494c86c8257341f8d015 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:36.652 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:36.910 [2024-07-14 05:28:43.903772] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:36.910 request: 00:14:36.910 { 00:14:36.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.910 "nsid": 2, 00:14:36.910 "host": "nqn.2016-06.io.spdk:host1", 00:14:36.910 "method": 
"nvmf_ns_remove_host", 00:14:36.910 "req_id": 1 00:14:36.910 } 00:14:36.910 Got JSON-RPC error response 00:14:36.910 response: 00:14:36.910 { 00:14:36.910 "code": -32602, 00:14:36.910 "message": "Invalid parameters" 00:14:36.910 } 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:36.910 [ 0]:0x2 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:36.910 05:28:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:36.910 05:28:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=36800dce9fc3494c86c8257341f8d015 00:14:36.910 05:28:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 36800dce9fc3494c86c8257341f8d015 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:36.910 05:28:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:36.910 05:28:44 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:37.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.167 05:28:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:37.425 rmmod nvme_tcp 00:14:37.425 rmmod nvme_fabrics 00:14:37.425 rmmod nvme_keyring 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3191567 ']' 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3191567 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 3191567 ']' 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 3191567 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3191567 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3191567' 00:14:37.425 killing process with pid 3191567 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 3191567 00:14:37.425 05:28:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 3191567 00:14:37.684 05:28:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:37.684 05:28:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:37.684 05:28:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:37.684 05:28:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:37.684 05:28:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:37.684 05:28:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.684 05:28:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.684 05:28:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.586 
05:28:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:39.586 00:14:39.586 real 0m16.379s 00:14:39.586 user 0m51.136s 00:14:39.586 sys 0m3.784s 00:14:39.586 05:28:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:39.586 05:28:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:39.586 ************************************ 00:14:39.586 END TEST nvmf_ns_masking 00:14:39.586 ************************************ 00:14:39.846 05:28:46 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:39.846 05:28:46 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:39.846 05:28:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:39.846 05:28:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:39.846 05:28:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:39.846 ************************************ 00:14:39.846 START TEST nvmf_nvme_cli 00:14:39.846 ************************************ 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:39.846 * Looking for test storage... 00:14:39.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.846 05:28:46 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:39.847 05:28:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:41.747 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:41.747 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.747 05:28:48 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:41.747 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.747 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:41.748 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:41.748 05:28:48 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:42.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:14:42.006 00:14:42.006 --- 10.0.0.2 ping statistics --- 00:14:42.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.006 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:42.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:42.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:14:42.006 00:14:42.006 --- 10.0.0.1 ping statistics --- 00:14:42.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.006 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3195065 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3195065 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 3195065 ']' 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
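The nvmftestinit sequence above moves one port of the e810 pair (cvl_0_0) into a dedicated network namespace and leaves its sibling (cvl_0_1) in the default namespace, so the initiator and target can talk to each other over real hardware on the same host. A minimal sketch of the equivalent manual setup, assuming the same interface names and the 10.0.0.0/24 addressing the trace uses (paths abbreviated relative to the SPDK tree):

# target side: isolate cvl_0_0 in its own namespace and address it as 10.0.0.2
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# initiator side: cvl_0_1 stays in the default namespace as 10.0.0.1
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up

# allow NVMe/TCP traffic in and verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# the target application is then started inside the namespace, as in the trace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF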
00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:42.006 05:28:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.006 [2024-07-14 05:28:48.976874] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:42.006 [2024-07-14 05:28:48.976964] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.006 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.006 [2024-07-14 05:28:49.045527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:42.264 [2024-07-14 05:28:49.137408] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.264 [2024-07-14 05:28:49.137464] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.264 [2024-07-14 05:28:49.137480] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.264 [2024-07-14 05:28:49.137493] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.264 [2024-07-14 05:28:49.137507] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.264 [2024-07-14 05:28:49.137601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.264 [2024-07-14 05:28:49.137672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.264 [2024-07-14 05:28:49.137702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:42.264 [2024-07-14 05:28:49.137704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.264 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:42.264 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:14:42.264 05:28:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:42.264 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:42.264 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.264 05:28:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.264 05:28:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:42.264 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.264 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.265 [2024-07-14 05:28:49.303735] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.265 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.265 05:28:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:42.265 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.265 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.265 Malloc0 00:14:42.265 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.265 05:28:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:42.265 05:28:49 nvmf_tcp.nvmf_nvme_cli 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.265 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.265 Malloc1 00:14:42.265 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.265 05:28:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:42.265 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.265 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.265 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.265 05:28:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:42.265 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.265 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.523 [2024-07-14 05:28:49.389830] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:42.523 00:14:42.523 Discovery Log Number of Records 2, Generation counter 2 00:14:42.523 =====Discovery Log Entry 0====== 00:14:42.523 trtype: tcp 00:14:42.523 adrfam: ipv4 00:14:42.523 subtype: current discovery subsystem 00:14:42.523 treq: not required 00:14:42.523 portid: 0 00:14:42.523 trsvcid: 4420 00:14:42.523 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:42.523 traddr: 10.0.0.2 00:14:42.523 eflags: explicit discovery connections, duplicate discovery information 00:14:42.523 sectype: none 00:14:42.523 =====Discovery Log Entry 1====== 00:14:42.523 trtype: tcp 00:14:42.523 adrfam: ipv4 00:14:42.523 subtype: nvme subsystem 00:14:42.523 treq: not required 00:14:42.523 portid: 0 00:14:42.523 trsvcid: 
4420 00:14:42.523 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:42.523 traddr: 10.0.0.2 00:14:42.523 eflags: none 00:14:42.523 sectype: none 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:42.523 05:28:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:43.086 05:28:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:43.086 05:28:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:14:43.086 05:28:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:43.086 05:28:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:43.086 05:28:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:43.086 05:28:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:45.612 05:28:52 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:45.612 /dev/nvme0n1 ]] 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:45.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.612 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:45.613 05:28:52 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:45.613 rmmod nvme_tcp 00:14:45.613 rmmod nvme_fabrics 00:14:45.613 rmmod nvme_keyring 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3195065 ']' 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3195065 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 3195065 ']' 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 3195065 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3195065 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3195065' 00:14:45.613 killing process with pid 3195065 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 3195065 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 3195065 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.613 05:28:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.145 05:28:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:48.145 00:14:48.145 real 0m7.959s 00:14:48.145 user 0m14.400s 00:14:48.145 sys 0m2.147s 00:14:48.145 05:28:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:48.145 05:28:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.145 ************************************ 00:14:48.145 END TEST nvmf_nvme_cli 00:14:48.145 ************************************ 00:14:48.145 05:28:54 nvmf_tcp -- 
nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:48.145 05:28:54 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:48.145 05:28:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:48.145 05:28:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:48.145 05:28:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:48.145 ************************************ 00:14:48.145 START TEST nvmf_vfio_user 00:14:48.145 ************************************ 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:48.145 * Looking for test storage... 00:14:48.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:48.145 
05:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3195906 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3195906' 00:14:48.145 Process pid: 3195906 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3195906 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3195906 ']' 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:48.145 05:28:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:48.145 [2024-07-14 05:28:54.858974] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:48.145 [2024-07-14 05:28:54.859053] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.145 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.145 [2024-07-14 05:28:54.919497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.145 [2024-07-14 05:28:55.004306] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.146 [2024-07-14 05:28:55.004362] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.146 [2024-07-14 05:28:55.004375] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.146 [2024-07-14 05:28:55.004386] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.146 [2024-07-14 05:28:55.004396] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
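The vfio-user variant of the test uses the same target binary (here pinned to cores 0-3 with -m '[0,1,2,3]'), but provisions its controllers over the VFIOUSER transport, with one Unix-domain socket directory per device instead of TCP listeners. A minimal sketch of the RPC sequence the following trace drives through scripts/rpc.py, assuming the target is already listening on the default /var/tmp/spdk.sock; the test repeats the per-device block for device 2 with Malloc2, cnode2 and vfio-user2/2:

# one-time: create the vfio-user transport and a root directory for the sockets
scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user

# per device: socket directory, backing malloc bdev, subsystem, namespace, listener
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
    -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0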
00:14:48.146 [2024-07-14 05:28:55.004481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.146 [2024-07-14 05:28:55.004544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.146 [2024-07-14 05:28:55.004573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.146 [2024-07-14 05:28:55.004574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.146 05:28:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:48.146 05:28:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:14:48.146 05:28:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:49.077 05:28:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:49.334 05:28:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:49.334 05:28:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:49.334 05:28:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:49.334 05:28:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:49.334 05:28:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:49.898 Malloc1 00:14:49.898 05:28:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:49.898 05:28:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:50.462 05:28:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:50.748 05:28:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:50.748 05:28:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:50.748 05:28:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:51.006 Malloc2 00:14:51.006 05:28:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:51.263 05:28:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:51.521 05:28:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:51.521 05:28:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:51.780 05:28:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:51.780 05:28:58 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:51.780 05:28:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:51.780 05:28:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:51.780 05:28:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:51.780 [2024-07-14 05:28:58.648182] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:51.780 [2024-07-14 05:28:58.648235] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196336 ] 00:14:51.780 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.780 [2024-07-14 05:28:58.683140] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:51.780 [2024-07-14 05:28:58.689363] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:51.780 [2024-07-14 05:28:58.689391] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f654d2bc000 00:14:51.780 [2024-07-14 05:28:58.690355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:51.780 [2024-07-14 05:28:58.691353] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:51.780 [2024-07-14 05:28:58.692356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:51.780 [2024-07-14 05:28:58.693359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:51.780 [2024-07-14 05:28:58.694362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:51.780 [2024-07-14 05:28:58.695367] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:51.780 [2024-07-14 05:28:58.696369] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:51.780 [2024-07-14 05:28:58.697375] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:51.780 [2024-07-14 05:28:58.698382] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:51.780 [2024-07-14 05:28:58.698403] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f654c06e000 00:14:51.780 [2024-07-14 05:28:58.699518] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:51.780 [2024-07-14 05:28:58.713498] vfio_user_pci.c: 386:spdk_vfio_user_setup: 
*DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:51.780 [2024-07-14 05:28:58.713528] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:51.780 [2024-07-14 05:28:58.722522] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:51.780 [2024-07-14 05:28:58.722575] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:51.780 [2024-07-14 05:28:58.722657] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:51.780 [2024-07-14 05:28:58.722685] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:51.780 [2024-07-14 05:28:58.722695] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:51.780 [2024-07-14 05:28:58.723511] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:51.780 [2024-07-14 05:28:58.723535] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:51.780 [2024-07-14 05:28:58.723547] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:51.780 [2024-07-14 05:28:58.724513] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:51.780 [2024-07-14 05:28:58.724530] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:51.780 [2024-07-14 05:28:58.724543] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:51.780 [2024-07-14 05:28:58.725518] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:51.780 [2024-07-14 05:28:58.725535] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:51.780 [2024-07-14 05:28:58.726524] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:51.780 [2024-07-14 05:28:58.726543] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:51.780 [2024-07-14 05:28:58.726551] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:51.780 [2024-07-14 05:28:58.726563] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:51.780 [2024-07-14 05:28:58.726676] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:51.780 [2024-07-14 05:28:58.726685] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:51.780 [2024-07-14 05:28:58.726693] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:51.780 [2024-07-14 05:28:58.727534] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:51.780 [2024-07-14 05:28:58.728529] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:51.780 [2024-07-14 05:28:58.729537] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:51.780 [2024-07-14 05:28:58.730534] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:51.780 [2024-07-14 05:28:58.730645] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:51.780 [2024-07-14 05:28:58.731545] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:51.780 [2024-07-14 05:28:58.731563] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:51.780 [2024-07-14 05:28:58.731572] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:51.780 [2024-07-14 05:28:58.731596] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:51.780 [2024-07-14 05:28:58.731613] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:51.780 [2024-07-14 05:28:58.731641] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:51.780 [2024-07-14 05:28:58.731651] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:51.780 [2024-07-14 05:28:58.731669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:51.780 [2024-07-14 05:28:58.731736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:51.780 [2024-07-14 05:28:58.731755] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:51.780 [2024-07-14 05:28:58.731763] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:51.780 [2024-07-14 05:28:58.731770] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:51.780 [2024-07-14 05:28:58.731778] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:51.780 [2024-07-14 05:28:58.731785] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:51.780 [2024-07-14 05:28:58.731792] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:51.781 [2024-07-14 05:28:58.731799] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.731810] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.731828] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:51.781 [2024-07-14 05:28:58.731860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:51.781 [2024-07-14 05:28:58.731886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.781 [2024-07-14 05:28:58.731900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.781 [2024-07-14 05:28:58.731911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.781 [2024-07-14 05:28:58.731923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.781 [2024-07-14 05:28:58.731931] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.731948] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.731963] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:51.781 [2024-07-14 05:28:58.731974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:51.781 [2024-07-14 05:28:58.731984] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:51.781 [2024-07-14 05:28:58.731992] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.732003] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.732015] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.732029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:51.781 [2024-07-14 05:28:58.732040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:51.781 [2024-07-14 05:28:58.732105] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.732120] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.732132] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:51.781 [2024-07-14 05:28:58.732140] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:51.781 [2024-07-14 05:28:58.732150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:51.781 [2024-07-14 05:28:58.732164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:51.781 [2024-07-14 05:28:58.732179] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:51.781 [2024-07-14 05:28:58.732193] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.732223] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.732238] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:51.781 [2024-07-14 05:28:58.732246] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:51.781 [2024-07-14 05:28:58.732255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:51.781 [2024-07-14 05:28:58.732277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:51.781 [2024-07-14 05:28:58.732297] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.732310] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.732321] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:51.781 [2024-07-14 05:28:58.732329] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:51.781 [2024-07-14 05:28:58.732338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:51.781 [2024-07-14 05:28:58.732348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:51.781 [2024-07-14 05:28:58.732361] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.732371] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:14:51.781 [2024-07-14 05:28:58.732384] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.732393] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.732401] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.732409] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:51.781 [2024-07-14 05:28:58.732416] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:51.781 [2024-07-14 05:28:58.732424] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:51.781 [2024-07-14 05:28:58.732452] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:51.781 [2024-07-14 05:28:58.732470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:51.781 [2024-07-14 05:28:58.732488] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:51.781 [2024-07-14 05:28:58.732499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:51.781 [2024-07-14 05:28:58.732515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:51.781 [2024-07-14 05:28:58.732528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:51.781 [2024-07-14 05:28:58.732544] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:51.781 [2024-07-14 05:28:58.732559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:51.781 [2024-07-14 05:28:58.732577] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:51.781 [2024-07-14 05:28:58.732586] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:51.781 [2024-07-14 05:28:58.732591] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:51.781 [2024-07-14 05:28:58.732597] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:51.781 [2024-07-14 05:28:58.732606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:51.781 [2024-07-14 05:28:58.732617] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:51.781 [2024-07-14 05:28:58.732624] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:51.781 [2024-07-14 05:28:58.732633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:51.781 [2024-07-14 05:28:58.732643] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:51.781 [2024-07-14 05:28:58.732650] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:51.781 [2024-07-14 05:28:58.732658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:51.781 [2024-07-14 05:28:58.732670] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:51.781 [2024-07-14 05:28:58.732677] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:51.781 [2024-07-14 05:28:58.732685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:51.781 [2024-07-14 05:28:58.732696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:51.781 [2024-07-14 05:28:58.732715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:51.781 [2024-07-14 05:28:58.732730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:51.781 [2024-07-14 05:28:58.732744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:51.781 ===================================================== 00:14:51.781 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:51.781 ===================================================== 00:14:51.781 Controller Capabilities/Features 00:14:51.781 ================================ 00:14:51.781 Vendor ID: 4e58 00:14:51.781 Subsystem Vendor ID: 4e58 00:14:51.781 Serial Number: SPDK1 00:14:51.781 Model Number: SPDK bdev Controller 00:14:51.781 Firmware Version: 24.05.1 00:14:51.781 Recommended Arb Burst: 6 00:14:51.781 IEEE OUI Identifier: 8d 6b 50 00:14:51.781 Multi-path I/O 00:14:51.781 May have multiple subsystem ports: Yes 00:14:51.781 May have multiple controllers: Yes 00:14:51.781 Associated with SR-IOV VF: No 00:14:51.781 Max Data Transfer Size: 131072 00:14:51.781 Max Number of Namespaces: 32 00:14:51.781 Max Number of I/O Queues: 127 00:14:51.781 NVMe Specification Version (VS): 1.3 00:14:51.781 NVMe Specification Version (Identify): 1.3 00:14:51.781 Maximum Queue Entries: 256 00:14:51.781 Contiguous Queues Required: Yes 00:14:51.781 Arbitration Mechanisms Supported 00:14:51.781 Weighted Round Robin: Not Supported 00:14:51.781 Vendor Specific: Not Supported 00:14:51.781 Reset Timeout: 15000 ms 00:14:51.781 Doorbell Stride: 4 bytes 00:14:51.781 NVM Subsystem Reset: Not Supported 00:14:51.781 Command Sets Supported 00:14:51.781 NVM Command Set: Supported 00:14:51.781 Boot Partition: Not Supported 00:14:51.781 Memory Page Size Minimum: 4096 bytes 00:14:51.781 Memory Page Size Maximum: 4096 bytes 00:14:51.781 Persistent Memory Region: Not Supported 00:14:51.781 Optional Asynchronous Events Supported 00:14:51.781 Namespace Attribute Notices: Supported 00:14:51.781 Firmware Activation Notices: Not Supported 00:14:51.782 ANA Change Notices: Not Supported 
00:14:51.782 PLE Aggregate Log Change Notices: Not Supported 00:14:51.782 LBA Status Info Alert Notices: Not Supported 00:14:51.782 EGE Aggregate Log Change Notices: Not Supported 00:14:51.782 Normal NVM Subsystem Shutdown event: Not Supported 00:14:51.782 Zone Descriptor Change Notices: Not Supported 00:14:51.782 Discovery Log Change Notices: Not Supported 00:14:51.782 Controller Attributes 00:14:51.782 128-bit Host Identifier: Supported 00:14:51.782 Non-Operational Permissive Mode: Not Supported 00:14:51.782 NVM Sets: Not Supported 00:14:51.782 Read Recovery Levels: Not Supported 00:14:51.782 Endurance Groups: Not Supported 00:14:51.782 Predictable Latency Mode: Not Supported 00:14:51.782 Traffic Based Keep ALive: Not Supported 00:14:51.782 Namespace Granularity: Not Supported 00:14:51.782 SQ Associations: Not Supported 00:14:51.782 UUID List: Not Supported 00:14:51.782 Multi-Domain Subsystem: Not Supported 00:14:51.782 Fixed Capacity Management: Not Supported 00:14:51.782 Variable Capacity Management: Not Supported 00:14:51.782 Delete Endurance Group: Not Supported 00:14:51.782 Delete NVM Set: Not Supported 00:14:51.782 Extended LBA Formats Supported: Not Supported 00:14:51.782 Flexible Data Placement Supported: Not Supported 00:14:51.782 00:14:51.782 Controller Memory Buffer Support 00:14:51.782 ================================ 00:14:51.782 Supported: No 00:14:51.782 00:14:51.782 Persistent Memory Region Support 00:14:51.782 ================================ 00:14:51.782 Supported: No 00:14:51.782 00:14:51.782 Admin Command Set Attributes 00:14:51.782 ============================ 00:14:51.782 Security Send/Receive: Not Supported 00:14:51.782 Format NVM: Not Supported 00:14:51.782 Firmware Activate/Download: Not Supported 00:14:51.782 Namespace Management: Not Supported 00:14:51.782 Device Self-Test: Not Supported 00:14:51.782 Directives: Not Supported 00:14:51.782 NVMe-MI: Not Supported 00:14:51.782 Virtualization Management: Not Supported 00:14:51.782 Doorbell Buffer Config: Not Supported 00:14:51.782 Get LBA Status Capability: Not Supported 00:14:51.782 Command & Feature Lockdown Capability: Not Supported 00:14:51.782 Abort Command Limit: 4 00:14:51.782 Async Event Request Limit: 4 00:14:51.782 Number of Firmware Slots: N/A 00:14:51.782 Firmware Slot 1 Read-Only: N/A 00:14:51.782 Firmware Activation Without Reset: N/A 00:14:51.782 Multiple Update Detection Support: N/A 00:14:51.782 Firmware Update Granularity: No Information Provided 00:14:51.782 Per-Namespace SMART Log: No 00:14:51.782 Asymmetric Namespace Access Log Page: Not Supported 00:14:51.782 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:51.782 Command Effects Log Page: Supported 00:14:51.782 Get Log Page Extended Data: Supported 00:14:51.782 Telemetry Log Pages: Not Supported 00:14:51.782 Persistent Event Log Pages: Not Supported 00:14:51.782 Supported Log Pages Log Page: May Support 00:14:51.782 Commands Supported & Effects Log Page: Not Supported 00:14:51.782 Feature Identifiers & Effects Log Page:May Support 00:14:51.782 NVMe-MI Commands & Effects Log Page: May Support 00:14:51.782 Data Area 4 for Telemetry Log: Not Supported 00:14:51.782 Error Log Page Entries Supported: 128 00:14:51.782 Keep Alive: Supported 00:14:51.782 Keep Alive Granularity: 10000 ms 00:14:51.782 00:14:51.782 NVM Command Set Attributes 00:14:51.782 ========================== 00:14:51.782 Submission Queue Entry Size 00:14:51.782 Max: 64 00:14:51.782 Min: 64 00:14:51.782 Completion Queue Entry Size 00:14:51.782 Max: 16 00:14:51.782 Min: 16 
00:14:51.782 Number of Namespaces: 32 00:14:51.782 Compare Command: Supported 00:14:51.782 Write Uncorrectable Command: Not Supported 00:14:51.782 Dataset Management Command: Supported 00:14:51.782 Write Zeroes Command: Supported 00:14:51.782 Set Features Save Field: Not Supported 00:14:51.782 Reservations: Not Supported 00:14:51.782 Timestamp: Not Supported 00:14:51.782 Copy: Supported 00:14:51.782 Volatile Write Cache: Present 00:14:51.782 Atomic Write Unit (Normal): 1 00:14:51.782 Atomic Write Unit (PFail): 1 00:14:51.782 Atomic Compare & Write Unit: 1 00:14:51.782 Fused Compare & Write: Supported 00:14:51.782 Scatter-Gather List 00:14:51.782 SGL Command Set: Supported (Dword aligned) 00:14:51.782 SGL Keyed: Not Supported 00:14:51.782 SGL Bit Bucket Descriptor: Not Supported 00:14:51.782 SGL Metadata Pointer: Not Supported 00:14:51.782 Oversized SGL: Not Supported 00:14:51.782 SGL Metadata Address: Not Supported 00:14:51.782 SGL Offset: Not Supported 00:14:51.782 Transport SGL Data Block: Not Supported 00:14:51.782 Replay Protected Memory Block: Not Supported 00:14:51.782 00:14:51.782 Firmware Slot Information 00:14:51.782 ========================= 00:14:51.782 Active slot: 1 00:14:51.782 Slot 1 Firmware Revision: 24.05.1 00:14:51.782 00:14:51.782 00:14:51.782 Commands Supported and Effects 00:14:51.782 ============================== 00:14:51.782 Admin Commands 00:14:51.782 -------------- 00:14:51.782 Get Log Page (02h): Supported 00:14:51.782 Identify (06h): Supported 00:14:51.782 Abort (08h): Supported 00:14:51.782 Set Features (09h): Supported 00:14:51.782 Get Features (0Ah): Supported 00:14:51.782 Asynchronous Event Request (0Ch): Supported 00:14:51.782 Keep Alive (18h): Supported 00:14:51.782 I/O Commands 00:14:51.782 ------------ 00:14:51.782 Flush (00h): Supported LBA-Change 00:14:51.782 Write (01h): Supported LBA-Change 00:14:51.782 Read (02h): Supported 00:14:51.782 Compare (05h): Supported 00:14:51.782 Write Zeroes (08h): Supported LBA-Change 00:14:51.782 Dataset Management (09h): Supported LBA-Change 00:14:51.782 Copy (19h): Supported LBA-Change 00:14:51.782 Unknown (79h): Supported LBA-Change 00:14:51.782 Unknown (7Ah): Supported 00:14:51.782 00:14:51.782 Error Log 00:14:51.782 ========= 00:14:51.782 00:14:51.782 Arbitration 00:14:51.782 =========== 00:14:51.782 Arbitration Burst: 1 00:14:51.782 00:14:51.782 Power Management 00:14:51.782 ================ 00:14:51.782 Number of Power States: 1 00:14:51.782 Current Power State: Power State #0 00:14:51.782 Power State #0: 00:14:51.782 Max Power: 0.00 W 00:14:51.782 Non-Operational State: Operational 00:14:51.782 Entry Latency: Not Reported 00:14:51.782 Exit Latency: Not Reported 00:14:51.782 Relative Read Throughput: 0 00:14:51.782 Relative Read Latency: 0 00:14:51.782 Relative Write Throughput: 0 00:14:51.782 Relative Write Latency: 0 00:14:51.782 Idle Power: Not Reported 00:14:51.782 Active Power: Not Reported 00:14:51.782 Non-Operational Permissive Mode: Not Supported 00:14:51.782 00:14:51.782 Health Information 00:14:51.782 ================== 00:14:51.782 Critical Warnings: 00:14:51.782 Available Spare Space: OK 00:14:51.782 Temperature: OK 00:14:51.782 Device Reliability: OK 00:14:51.782 Read Only: No 00:14:51.782 Volatile Memory Backup: OK 00:14:51.782 Current Temperature: 0 Kelvin[2024-07-14 05:28:58.732898] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:51.782 [2024-07-14 05:28:58.732915] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:51.782 [2024-07-14 05:28:58.732954] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:51.782 [2024-07-14 05:28:58.732971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.782 [2024-07-14 05:28:58.732981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.782 [2024-07-14 05:28:58.732991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.782 [2024-07-14 05:28:58.733001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.782 [2024-07-14 05:28:58.733567] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:51.782 [2024-07-14 05:28:58.733586] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:51.782 [2024-07-14 05:28:58.734564] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:51.782 [2024-07-14 05:28:58.734635] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:51.782 [2024-07-14 05:28:58.734649] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:51.782 [2024-07-14 05:28:58.735579] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:51.782 [2024-07-14 05:28:58.735601] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:51.782 [2024-07-14 05:28:58.735652] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:51.782 [2024-07-14 05:28:58.740876] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:51.782 (-273 Celsius) 00:14:51.782 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:51.782 Available Spare: 0% 00:14:51.782 Available Spare Threshold: 0% 00:14:51.782 Life Percentage Used: 0% 00:14:51.782 Data Units Read: 0 00:14:51.782 Data Units Written: 0 00:14:51.782 Host Read Commands: 0 00:14:51.782 Host Write Commands: 0 00:14:51.782 Controller Busy Time: 0 minutes 00:14:51.782 Power Cycles: 0 00:14:51.782 Power On Hours: 0 hours 00:14:51.782 Unsafe Shutdowns: 0 00:14:51.782 Unrecoverable Media Errors: 0 00:14:51.782 Lifetime Error Log Entries: 0 00:14:51.782 Warning Temperature Time: 0 minutes 00:14:51.782 Critical Temperature Time: 0 minutes 00:14:51.782 00:14:51.783 Number of Queues 00:14:51.783 ================ 00:14:51.783 Number of I/O Submission Queues: 127 00:14:51.783 Number of I/O Completion Queues: 127 00:14:51.783 00:14:51.783 Active Namespaces 00:14:51.783 ================= 00:14:51.783 Namespace ID:1 00:14:51.783 Error Recovery Timeout: Unlimited 00:14:51.783 Command Set Identifier: NVM (00h) 00:14:51.783 Deallocate: Supported 00:14:51.783 Deallocated/Unwritten Error: Not Supported 
00:14:51.783 Deallocated Read Value: Unknown 00:14:51.783 Deallocate in Write Zeroes: Not Supported 00:14:51.783 Deallocated Guard Field: 0xFFFF 00:14:51.783 Flush: Supported 00:14:51.783 Reservation: Supported 00:14:51.783 Namespace Sharing Capabilities: Multiple Controllers 00:14:51.783 Size (in LBAs): 131072 (0GiB) 00:14:51.783 Capacity (in LBAs): 131072 (0GiB) 00:14:51.783 Utilization (in LBAs): 131072 (0GiB) 00:14:51.783 NGUID: DC8D4BB8CD554463940947E0998144AC 00:14:51.783 UUID: dc8d4bb8-cd55-4463-9409-47e0998144ac 00:14:51.783 Thin Provisioning: Not Supported 00:14:51.783 Per-NS Atomic Units: Yes 00:14:51.783 Atomic Boundary Size (Normal): 0 00:14:51.783 Atomic Boundary Size (PFail): 0 00:14:51.783 Atomic Boundary Offset: 0 00:14:51.783 Maximum Single Source Range Length: 65535 00:14:51.783 Maximum Copy Length: 65535 00:14:51.783 Maximum Source Range Count: 1 00:14:51.783 NGUID/EUI64 Never Reused: No 00:14:51.783 Namespace Write Protected: No 00:14:51.783 Number of LBA Formats: 1 00:14:51.783 Current LBA Format: LBA Format #00 00:14:51.783 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:51.783 00:14:51.783 05:28:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:51.783 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.040 [2024-07-14 05:28:58.970691] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.306 Initializing NVMe Controllers 00:14:57.306 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:57.306 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:57.306 Initialization complete. Launching workers. 00:14:57.306 ======================================================== 00:14:57.306 Latency(us) 00:14:57.306 Device Information : IOPS MiB/s Average min max 00:14:57.306 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 36186.98 141.36 3537.20 1140.75 7397.71 00:14:57.306 ======================================================== 00:14:57.306 Total : 36186.98 141.36 3537.20 1140.75 7397.71 00:14:57.306 00:14:57.306 [2024-07-14 05:29:03.993812] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.306 05:29:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:57.306 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.306 [2024-07-14 05:29:04.235971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:02.568 Initializing NVMe Controllers 00:15:02.568 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:02.568 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:02.568 Initialization complete. Launching workers. 
00:15:02.568 ======================================================== 00:15:02.568 Latency(us) 00:15:02.568 Device Information : IOPS MiB/s Average min max 00:15:02.568 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7995.66 4985.53 15973.27 00:15:02.568 ======================================================== 00:15:02.568 Total : 16025.60 62.60 7995.66 4985.53 15973.27 00:15:02.568 00:15:02.568 [2024-07-14 05:29:09.272640] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:02.568 05:29:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:02.568 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.568 [2024-07-14 05:29:09.483692] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:07.826 [2024-07-14 05:29:14.553186] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:07.826 Initializing NVMe Controllers 00:15:07.826 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:07.826 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:07.826 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:07.826 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:07.826 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:07.826 Initialization complete. Launching workers. 00:15:07.826 Starting thread on core 2 00:15:07.826 Starting thread on core 3 00:15:07.826 Starting thread on core 1 00:15:07.826 05:29:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:07.826 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.826 [2024-07-14 05:29:14.845780] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.121 [2024-07-14 05:29:17.912806] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.121 Initializing NVMe Controllers 00:15:11.121 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.121 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.121 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:11.121 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:11.121 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:11.121 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:11.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:11.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:11.121 Initialization complete. Launching workers. 
00:15:11.121 Starting thread on core 1 with urgent priority queue 00:15:11.121 Starting thread on core 2 with urgent priority queue 00:15:11.121 Starting thread on core 3 with urgent priority queue 00:15:11.121 Starting thread on core 0 with urgent priority queue 00:15:11.121 SPDK bdev Controller (SPDK1 ) core 0: 5532.33 IO/s 18.08 secs/100000 ios 00:15:11.121 SPDK bdev Controller (SPDK1 ) core 1: 5346.33 IO/s 18.70 secs/100000 ios 00:15:11.121 SPDK bdev Controller (SPDK1 ) core 2: 5525.00 IO/s 18.10 secs/100000 ios 00:15:11.121 SPDK bdev Controller (SPDK1 ) core 3: 5500.00 IO/s 18.18 secs/100000 ios 00:15:11.121 ======================================================== 00:15:11.121 00:15:11.121 05:29:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:11.121 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.121 [2024-07-14 05:29:18.209540] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.379 Initializing NVMe Controllers 00:15:11.379 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.379 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.379 Namespace ID: 1 size: 0GB 00:15:11.379 Initialization complete. 00:15:11.379 INFO: using host memory buffer for IO 00:15:11.379 Hello world! 00:15:11.379 [2024-07-14 05:29:18.243098] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.379 05:29:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:11.379 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.637 [2024-07-14 05:29:18.540317] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:12.571 Initializing NVMe Controllers 00:15:12.571 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:12.571 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:12.571 Initialization complete. Launching workers. 
00:15:12.571 submit (in ns) avg, min, max = 6264.2, 3504.4, 4002611.1 00:15:12.571 complete (in ns) avg, min, max = 26097.9, 2061.1, 4147504.4 00:15:12.571 00:15:12.571 Submit histogram 00:15:12.571 ================ 00:15:12.571 Range in us Cumulative Count 00:15:12.571 3.484 - 3.508: 0.0222% ( 3) 00:15:12.571 3.508 - 3.532: 1.2039% ( 160) 00:15:12.571 3.532 - 3.556: 2.9323% ( 234) 00:15:12.571 3.556 - 3.579: 8.0730% ( 696) 00:15:12.571 3.579 - 3.603: 15.9982% ( 1073) 00:15:12.571 3.603 - 3.627: 25.8956% ( 1340) 00:15:12.571 3.627 - 3.650: 34.6185% ( 1181) 00:15:12.571 3.650 - 3.674: 42.3148% ( 1042) 00:15:12.571 3.674 - 3.698: 48.9918% ( 904) 00:15:12.571 3.698 - 3.721: 56.3483% ( 996) 00:15:12.571 3.721 - 3.745: 61.5777% ( 708) 00:15:12.571 3.745 - 3.769: 65.6400% ( 550) 00:15:12.571 3.769 - 3.793: 68.6904% ( 413) 00:15:12.571 3.793 - 3.816: 71.9034% ( 435) 00:15:12.571 3.816 - 3.840: 75.3305% ( 464) 00:15:12.571 3.840 - 3.864: 79.2895% ( 536) 00:15:12.571 3.864 - 3.887: 82.5836% ( 446) 00:15:12.571 3.887 - 3.911: 85.3017% ( 368) 00:15:12.571 3.911 - 3.935: 87.8351% ( 343) 00:15:12.571 3.935 - 3.959: 89.6669% ( 248) 00:15:12.571 3.959 - 3.982: 91.0776% ( 191) 00:15:12.571 3.982 - 4.006: 92.5844% ( 204) 00:15:12.571 4.006 - 4.030: 93.5298% ( 128) 00:15:12.571 4.030 - 4.053: 94.3275% ( 108) 00:15:12.571 4.053 - 4.077: 95.0366% ( 96) 00:15:12.571 4.077 - 4.101: 95.5979% ( 76) 00:15:12.571 4.101 - 4.124: 96.0041% ( 55) 00:15:12.571 4.124 - 4.148: 96.3587% ( 48) 00:15:12.571 4.148 - 4.172: 96.4990% ( 19) 00:15:12.571 4.172 - 4.196: 96.6615% ( 22) 00:15:12.571 4.196 - 4.219: 96.7944% ( 18) 00:15:12.571 4.219 - 4.243: 96.8461% ( 7) 00:15:12.571 4.243 - 4.267: 96.9643% ( 16) 00:15:12.571 4.267 - 4.290: 97.0825% ( 16) 00:15:12.571 4.290 - 4.314: 97.1859% ( 14) 00:15:12.571 4.314 - 4.338: 97.2450% ( 8) 00:15:12.571 4.338 - 4.361: 97.3041% ( 8) 00:15:12.571 4.361 - 4.385: 97.3336% ( 4) 00:15:12.571 4.385 - 4.409: 97.3779% ( 6) 00:15:12.571 4.409 - 4.433: 97.4001% ( 3) 00:15:12.571 4.456 - 4.480: 97.4075% ( 1) 00:15:12.571 4.480 - 4.504: 97.4149% ( 1) 00:15:12.571 4.551 - 4.575: 97.4223% ( 1) 00:15:12.571 4.575 - 4.599: 97.4370% ( 2) 00:15:12.571 4.599 - 4.622: 97.4740% ( 5) 00:15:12.571 4.622 - 4.646: 97.5331% ( 8) 00:15:12.571 4.646 - 4.670: 97.5700% ( 5) 00:15:12.571 4.670 - 4.693: 97.6143% ( 6) 00:15:12.571 4.693 - 4.717: 97.6438% ( 4) 00:15:12.571 4.717 - 4.741: 97.6955% ( 7) 00:15:12.571 4.741 - 4.764: 97.7842% ( 12) 00:15:12.571 4.764 - 4.788: 97.8359% ( 7) 00:15:12.571 4.788 - 4.812: 97.8950% ( 8) 00:15:12.571 4.812 - 4.836: 97.9171% ( 3) 00:15:12.571 4.836 - 4.859: 97.9245% ( 1) 00:15:12.571 4.859 - 4.883: 97.9688% ( 6) 00:15:12.571 4.883 - 4.907: 98.0058% ( 5) 00:15:12.571 4.907 - 4.930: 98.0279% ( 3) 00:15:12.571 4.930 - 4.954: 98.0796% ( 7) 00:15:12.571 4.954 - 4.978: 98.1018% ( 3) 00:15:12.571 4.978 - 5.001: 98.1461% ( 6) 00:15:12.571 5.001 - 5.025: 98.1683% ( 3) 00:15:12.571 5.025 - 5.049: 98.1978% ( 4) 00:15:12.571 5.049 - 5.073: 98.2126% ( 2) 00:15:12.571 5.073 - 5.096: 98.2347% ( 3) 00:15:12.571 5.096 - 5.120: 98.2495% ( 2) 00:15:12.571 5.144 - 5.167: 98.2569% ( 1) 00:15:12.571 5.167 - 5.191: 98.2717% ( 2) 00:15:12.571 5.191 - 5.215: 98.2790% ( 1) 00:15:12.571 5.215 - 5.239: 98.3012% ( 3) 00:15:12.571 5.239 - 5.262: 98.3234% ( 3) 00:15:12.571 5.262 - 5.286: 98.3381% ( 2) 00:15:12.571 5.310 - 5.333: 98.3603% ( 3) 00:15:12.571 5.333 - 5.357: 98.3677% ( 1) 00:15:12.571 5.381 - 5.404: 98.3751% ( 1) 00:15:12.571 5.404 - 5.428: 98.3825% ( 1) 00:15:12.571 5.452 - 5.476: 98.3898% ( 1) 
00:15:12.571 5.476 - 5.499: 98.3972% ( 1) 00:15:12.571 5.523 - 5.547: 98.4046% ( 1) 00:15:12.572 5.547 - 5.570: 98.4120% ( 1) 00:15:12.572 5.570 - 5.594: 98.4194% ( 1) 00:15:12.572 5.665 - 5.689: 98.4415% ( 3) 00:15:12.572 5.689 - 5.713: 98.4489% ( 1) 00:15:12.572 5.713 - 5.736: 98.4563% ( 1) 00:15:12.572 5.831 - 5.855: 98.4637% ( 1) 00:15:12.572 5.855 - 5.879: 98.4711% ( 1) 00:15:12.572 5.926 - 5.950: 98.4785% ( 1) 00:15:12.572 5.973 - 5.997: 98.5006% ( 3) 00:15:12.572 5.997 - 6.021: 98.5080% ( 1) 00:15:12.572 6.068 - 6.116: 98.5154% ( 1) 00:15:12.572 6.163 - 6.210: 98.5376% ( 3) 00:15:12.572 6.210 - 6.258: 98.5523% ( 2) 00:15:12.572 6.400 - 6.447: 98.5597% ( 1) 00:15:12.572 6.542 - 6.590: 98.5671% ( 1) 00:15:12.572 6.637 - 6.684: 98.5745% ( 1) 00:15:12.572 6.684 - 6.732: 98.5819% ( 1) 00:15:12.572 6.732 - 6.779: 98.5893% ( 1) 00:15:12.572 6.874 - 6.921: 98.5966% ( 1) 00:15:12.572 6.921 - 6.969: 98.6040% ( 1) 00:15:12.572 7.016 - 7.064: 98.6114% ( 1) 00:15:12.572 7.348 - 7.396: 98.6188% ( 1) 00:15:12.572 7.443 - 7.490: 98.6262% ( 1) 00:15:12.572 7.538 - 7.585: 98.6336% ( 1) 00:15:12.572 7.633 - 7.680: 98.6410% ( 1) 00:15:12.572 7.822 - 7.870: 98.6631% ( 3) 00:15:12.572 7.870 - 7.917: 98.6705% ( 1) 00:15:12.572 8.012 - 8.059: 98.6853% ( 2) 00:15:12.572 8.059 - 8.107: 98.7001% ( 2) 00:15:12.572 8.107 - 8.154: 98.7074% ( 1) 00:15:12.572 8.249 - 8.296: 98.7222% ( 2) 00:15:12.572 8.296 - 8.344: 98.7296% ( 1) 00:15:12.572 8.391 - 8.439: 98.7444% ( 2) 00:15:12.572 8.439 - 8.486: 98.7518% ( 1) 00:15:12.572 8.533 - 8.581: 98.7591% ( 1) 00:15:12.572 8.581 - 8.628: 98.7665% ( 1) 00:15:12.572 8.723 - 8.770: 98.7739% ( 1) 00:15:12.572 8.818 - 8.865: 98.7887% ( 2) 00:15:12.572 8.865 - 8.913: 98.8108% ( 3) 00:15:12.572 8.913 - 8.960: 98.8182% ( 1) 00:15:12.572 9.292 - 9.339: 98.8330% ( 2) 00:15:12.572 9.387 - 9.434: 98.8404% ( 1) 00:15:12.572 9.576 - 9.624: 98.8478% ( 1) 00:15:12.572 9.624 - 9.671: 98.8699% ( 3) 00:15:12.572 9.861 - 9.908: 98.8773% ( 1) 00:15:12.572 10.240 - 10.287: 98.8995% ( 3) 00:15:12.572 10.335 - 10.382: 98.9069% ( 1) 00:15:12.572 10.382 - 10.430: 98.9142% ( 1) 00:15:12.572 10.430 - 10.477: 98.9290% ( 2) 00:15:12.572 10.524 - 10.572: 98.9364% ( 1) 00:15:12.572 10.904 - 10.951: 98.9438% ( 1) 00:15:12.572 11.046 - 11.093: 98.9512% ( 1) 00:15:12.572 11.188 - 11.236: 98.9586% ( 1) 00:15:12.572 11.520 - 11.567: 98.9733% ( 2) 00:15:12.572 11.662 - 11.710: 98.9807% ( 1) 00:15:12.572 11.899 - 11.947: 98.9881% ( 1) 00:15:12.572 11.947 - 11.994: 98.9955% ( 1) 00:15:12.572 12.041 - 12.089: 99.0029% ( 1) 00:15:12.572 12.231 - 12.326: 99.0250% ( 3) 00:15:12.572 12.516 - 12.610: 99.0324% ( 1) 00:15:12.572 12.705 - 12.800: 99.0398% ( 1) 00:15:12.572 12.990 - 13.084: 99.0472% ( 1) 00:15:12.572 13.084 - 13.179: 99.0620% ( 2) 00:15:12.572 13.179 - 13.274: 99.0694% ( 1) 00:15:12.572 13.274 - 13.369: 99.0767% ( 1) 00:15:12.572 13.653 - 13.748: 99.0841% ( 1) 00:15:12.572 14.033 - 14.127: 99.0915% ( 1) 00:15:12.572 14.222 - 14.317: 99.0989% ( 1) 00:15:12.572 14.412 - 14.507: 99.1063% ( 1) 00:15:12.572 14.696 - 14.791: 99.1137% ( 1) 00:15:12.572 17.161 - 17.256: 99.1211% ( 1) 00:15:12.572 17.256 - 17.351: 99.1358% ( 2) 00:15:12.572 17.351 - 17.446: 99.1580% ( 3) 00:15:12.572 17.446 - 17.541: 99.1949% ( 5) 00:15:12.572 17.541 - 17.636: 99.2171% ( 3) 00:15:12.572 17.636 - 17.730: 99.2836% ( 9) 00:15:12.572 17.730 - 17.825: 99.3279% ( 6) 00:15:12.572 17.825 - 17.920: 99.4091% ( 11) 00:15:12.572 17.920 - 18.015: 99.4830% ( 10) 00:15:12.572 18.015 - 18.110: 99.5199% ( 5) 00:15:12.572 18.110 - 18.204: 99.5864% 
( 9) 00:15:12.572 18.204 - 18.299: 99.6307% ( 6) 00:15:12.572 18.299 - 18.394: 99.6676% ( 5) 00:15:12.572 18.394 - 18.489: 99.7415% ( 10) 00:15:12.572 18.489 - 18.584: 99.7932% ( 7) 00:15:12.572 18.584 - 18.679: 99.8006% ( 1) 00:15:12.572 18.679 - 18.773: 99.8449% ( 6) 00:15:12.572 18.773 - 18.868: 99.8597% ( 2) 00:15:12.572 18.868 - 18.963: 99.8744% ( 2) 00:15:12.572 18.963 - 19.058: 99.8892% ( 2) 00:15:12.572 19.058 - 19.153: 99.8966% ( 1) 00:15:12.572 19.247 - 19.342: 99.9040% ( 1) 00:15:12.572 19.437 - 19.532: 99.9114% ( 1) 00:15:12.572 19.627 - 19.721: 99.9188% ( 1) 00:15:12.572 20.859 - 20.954: 99.9261% ( 1) 00:15:12.572 23.419 - 23.514: 99.9335% ( 1) 00:15:12.572 23.514 - 23.609: 99.9409% ( 1) 00:15:12.572 3980.705 - 4004.978: 100.0000% ( 8) 00:15:12.572 00:15:12.572 Complete histogram 00:15:12.572 ================== 00:15:12.572 Range in us Cumulative Count 00:15:12.572 2.050 - 2.062: 0.0074% ( 1) 00:15:12.572 2.062 - 2.074: 14.4398% ( 1954) 00:15:12.572 2.074 - 2.086: 41.5910% ( 3676) 00:15:12.572 2.086 - 2.098: 44.4198% ( 383) 00:15:12.572 2.098 - 2.110: 54.4279% ( 1355) 00:15:12.572 2.110 - 2.121: 61.0902% ( 902) 00:15:12.572 2.121 - 2.133: 62.7225% ( 221) 00:15:12.572 2.133 - 2.145: 74.2005% ( 1554) 00:15:12.572 2.145 - 2.157: 80.5746% ( 863) 00:15:12.572 2.157 - 2.169: 82.0371% ( 198) 00:15:12.572 2.169 - 2.181: 86.0256% ( 540) 00:15:12.572 2.181 - 2.193: 87.9312% ( 258) 00:15:12.572 2.193 - 2.204: 88.8027% ( 118) 00:15:12.572 2.204 - 2.216: 90.5163% ( 232) 00:15:12.572 2.216 - 2.228: 91.8532% ( 181) 00:15:12.572 2.228 - 2.240: 93.6110% ( 238) 00:15:12.572 2.240 - 2.252: 94.4900% ( 119) 00:15:12.572 2.252 - 2.264: 94.8224% ( 45) 00:15:12.572 2.264 - 2.276: 94.9996% ( 24) 00:15:12.572 2.276 - 2.287: 95.1326% ( 18) 00:15:12.572 2.287 - 2.299: 95.3542% ( 30) 00:15:12.572 2.299 - 2.311: 95.6644% ( 42) 00:15:12.572 2.311 - 2.323: 95.7899% ( 17) 00:15:12.572 2.323 - 2.335: 95.8269% ( 5) 00:15:12.572 2.335 - 2.347: 95.8786% ( 7) 00:15:12.572 2.347 - 2.359: 95.9155% ( 5) 00:15:12.572 2.359 - 2.370: 96.1888% ( 37) 00:15:12.572 2.370 - 2.382: 96.5064% ( 43) 00:15:12.572 2.382 - 2.394: 96.8905% ( 52) 00:15:12.572 2.394 - 2.406: 97.2155% ( 44) 00:15:12.572 2.406 - 2.418: 97.4518% ( 32) 00:15:12.572 2.418 - 2.430: 97.6217% ( 23) 00:15:12.572 2.430 - 2.441: 97.8285% ( 28) 00:15:12.572 2.441 - 2.453: 97.9245% ( 13) 00:15:12.572 2.453 - 2.465: 97.9984% ( 10) 00:15:12.572 2.465 - 2.477: 98.0427% ( 6) 00:15:12.572 2.477 - 2.489: 98.0722% ( 4) 00:15:12.572 2.489 - 2.501: 98.1092% ( 5) 00:15:12.572 2.513 - 2.524: 98.1239% ( 2) 00:15:12.572 2.524 - 2.536: 98.1313% ( 1) 00:15:12.572 2.548 - 2.560: 98.1387% ( 1) 00:15:12.572 2.560 - 2.572: 98.1461% ( 1) 00:15:12.572 2.572 - 2.584: 98.1609% ( 2) 00:15:12.572 2.584 - 2.596: 98.1683% ( 1) 00:15:12.572 2.596 - 2.607: 98.1756% ( 1) 00:15:12.572 2.643 - 2.655: 98.1830% ( 1) 00:15:12.572 2.655 - 2.667: 98.1978% ( 2) 00:15:12.572 2.667 - 2.679: 98.2200% ( 3) 00:15:12.572 2.679 - 2.690: 98.2273% ( 1) 00:15:12.572 2.690 - 2.702: 98.2347% ( 1) 00:15:12.572 2.750 - 2.761: 98.2421% ( 1) 00:15:12.572 2.904 - 2.916: 98.2495% ( 1) 00:15:12.572 2.927 - 2.939: 98.2569% ( 1) 00:15:12.572 2.951 - 2.963: 98.2643% ( 1) 00:15:12.572 2.999 - 3.010: 98.2717% ( 1) 00:15:12.572 3.010 - 3.022: 98.2790% ( 1) 00:15:12.572 3.034 - 3.058: 98.2938% ( 2) 00:15:12.572 3.058 - 3.081: 98.3012% ( 1) 00:15:12.572 3.105 - 3.129: 98.3086% ( 1) 00:15:12.572 3.153 - 3.176: 98.3160% ( 1) 00:15:12.572 3.176 - 3.200: 98.3234% ( 1) 00:15:12.572 3.200 - 3.224: 98.3307% ( 1) 00:15:12.572 3.224 - 
3.247: 98.3381% ( 1) 00:15:12.572 3.247 - 3.271: 98.3529% ( 2) 00:15:12.572 3.271 - 3.295: 98.3751% ( 3) 00:15:12.572 3.295 - 3.319: 98.3898% ( 2) 00:15:12.572 3.319 - 3.342: 98.4120% ( 3) 00:15:12.572 3.342 - 3.366: 98.4415% ( 4) 00:15:12.572 3.413 - 3.437: 98.4637% ( 3) 00:15:12.572 3.461 - 3.484: 98.4859% ( 3) 00:15:12.572 3.484 - 3.508: 98.5080% ( 3) 00:15:12.572 3.532 - 3.556: 98.5154% ( 1) 00:15:12.572 3.556 - 3.579: 98.5376% ( 3) 00:15:12.572 3.579 - 3.603: 98.5523% ( 2) 00:15:12.572 3.603 - 3.627: 9[2024-07-14 05:29:19.563328] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:12.572 8.5745% ( 3) 00:15:12.572 3.627 - 3.650: 98.5966% ( 3) 00:15:12.572 3.698 - 3.721: 98.6040% ( 1) 00:15:12.572 3.721 - 3.745: 98.6114% ( 1) 00:15:12.572 3.769 - 3.793: 98.6188% ( 1) 00:15:12.572 3.793 - 3.816: 98.6262% ( 1) 00:15:12.572 3.816 - 3.840: 98.6779% ( 7) 00:15:12.572 3.840 - 3.864: 98.6853% ( 1) 00:15:12.572 3.864 - 3.887: 98.7001% ( 2) 00:15:12.572 3.887 - 3.911: 98.7074% ( 1) 00:15:12.572 4.030 - 4.053: 98.7148% ( 1) 00:15:12.572 4.978 - 5.001: 98.7222% ( 1) 00:15:12.572 5.452 - 5.476: 98.7296% ( 1) 00:15:12.572 5.499 - 5.523: 98.7370% ( 1) 00:15:12.572 5.547 - 5.570: 98.7444% ( 1) 00:15:12.572 5.594 - 5.618: 98.7518% ( 1) 00:15:12.572 5.689 - 5.713: 98.7591% ( 1) 00:15:12.572 5.879 - 5.902: 98.7739% ( 2) 00:15:12.572 6.116 - 6.163: 98.7887% ( 2) 00:15:12.572 6.400 - 6.447: 98.7961% ( 1) 00:15:12.572 6.637 - 6.684: 98.8035% ( 1) 00:15:12.572 6.684 - 6.732: 98.8182% ( 2) 00:15:12.572 6.779 - 6.827: 98.8256% ( 1) 00:15:12.572 6.921 - 6.969: 98.8330% ( 1) 00:15:12.573 7.064 - 7.111: 98.8404% ( 1) 00:15:12.573 7.253 - 7.301: 98.8478% ( 1) 00:15:12.573 7.301 - 7.348: 98.8552% ( 1) 00:15:12.573 7.490 - 7.538: 98.8699% ( 2) 00:15:12.573 7.917 - 7.964: 98.8773% ( 1) 00:15:12.573 8.249 - 8.296: 98.8847% ( 1) 00:15:12.573 8.486 - 8.533: 98.8921% ( 1) 00:15:12.573 9.244 - 9.292: 98.8995% ( 1) 00:15:12.573 10.809 - 10.856: 98.9069% ( 1) 00:15:12.573 10.951 - 10.999: 98.9142% ( 1) 00:15:12.573 11.141 - 11.188: 98.9216% ( 1) 00:15:12.573 15.360 - 15.455: 98.9364% ( 2) 00:15:12.573 15.455 - 15.550: 98.9438% ( 1) 00:15:12.573 15.550 - 15.644: 98.9512% ( 1) 00:15:12.573 15.644 - 15.739: 98.9586% ( 1) 00:15:12.573 15.739 - 15.834: 98.9955% ( 5) 00:15:12.573 15.834 - 15.929: 99.0177% ( 3) 00:15:12.573 15.929 - 16.024: 99.0398% ( 3) 00:15:12.573 16.024 - 16.119: 99.0694% ( 4) 00:15:12.573 16.119 - 16.213: 99.0841% ( 2) 00:15:12.573 16.213 - 16.308: 99.1137% ( 4) 00:15:12.573 16.308 - 16.403: 99.1506% ( 5) 00:15:12.573 16.403 - 16.498: 99.1654% ( 2) 00:15:12.573 16.498 - 16.593: 99.1875% ( 3) 00:15:12.573 16.593 - 16.687: 99.2023% ( 2) 00:15:12.573 16.687 - 16.782: 99.2466% ( 6) 00:15:12.573 16.782 - 16.877: 99.2909% ( 6) 00:15:12.573 16.877 - 16.972: 99.3131% ( 3) 00:15:12.573 17.161 - 17.256: 99.3279% ( 2) 00:15:12.573 17.256 - 17.351: 99.3500% ( 3) 00:15:12.573 17.351 - 17.446: 99.3574% ( 1) 00:15:12.573 17.446 - 17.541: 99.3648% ( 1) 00:15:12.573 17.541 - 17.636: 99.3722% ( 1) 00:15:12.573 17.730 - 17.825: 99.3796% ( 1) 00:15:12.573 17.920 - 18.015: 99.3870% ( 1) 00:15:12.573 18.299 - 18.394: 99.3943% ( 1) 00:15:12.573 28.444 - 28.634: 99.4017% ( 1) 00:15:12.573 3131.164 - 3155.437: 99.4091% ( 1) 00:15:12.573 3980.705 - 4004.978: 99.8523% ( 60) 00:15:12.573 4004.978 - 4029.250: 99.9852% ( 18) 00:15:12.573 4029.250 - 4053.523: 99.9926% ( 1) 00:15:12.573 4126.341 - 4150.613: 100.0000% ( 1) 00:15:12.573 00:15:12.573 05:29:19 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:12.573 05:29:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:12.573 05:29:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:12.573 05:29:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:12.573 05:29:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:12.830 [ 00:15:12.831 { 00:15:12.831 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:12.831 "subtype": "Discovery", 00:15:12.831 "listen_addresses": [], 00:15:12.831 "allow_any_host": true, 00:15:12.831 "hosts": [] 00:15:12.831 }, 00:15:12.831 { 00:15:12.831 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:12.831 "subtype": "NVMe", 00:15:12.831 "listen_addresses": [ 00:15:12.831 { 00:15:12.831 "trtype": "VFIOUSER", 00:15:12.831 "adrfam": "IPv4", 00:15:12.831 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:12.831 "trsvcid": "0" 00:15:12.831 } 00:15:12.831 ], 00:15:12.831 "allow_any_host": true, 00:15:12.831 "hosts": [], 00:15:12.831 "serial_number": "SPDK1", 00:15:12.831 "model_number": "SPDK bdev Controller", 00:15:12.831 "max_namespaces": 32, 00:15:12.831 "min_cntlid": 1, 00:15:12.831 "max_cntlid": 65519, 00:15:12.831 "namespaces": [ 00:15:12.831 { 00:15:12.831 "nsid": 1, 00:15:12.831 "bdev_name": "Malloc1", 00:15:12.831 "name": "Malloc1", 00:15:12.831 "nguid": "DC8D4BB8CD554463940947E0998144AC", 00:15:12.831 "uuid": "dc8d4bb8-cd55-4463-9409-47e0998144ac" 00:15:12.831 } 00:15:12.831 ] 00:15:12.831 }, 00:15:12.831 { 00:15:12.831 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:12.831 "subtype": "NVMe", 00:15:12.831 "listen_addresses": [ 00:15:12.831 { 00:15:12.831 "trtype": "VFIOUSER", 00:15:12.831 "adrfam": "IPv4", 00:15:12.831 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:12.831 "trsvcid": "0" 00:15:12.831 } 00:15:12.831 ], 00:15:12.831 "allow_any_host": true, 00:15:12.831 "hosts": [], 00:15:12.831 "serial_number": "SPDK2", 00:15:12.831 "model_number": "SPDK bdev Controller", 00:15:12.831 "max_namespaces": 32, 00:15:12.831 "min_cntlid": 1, 00:15:12.831 "max_cntlid": 65519, 00:15:12.831 "namespaces": [ 00:15:12.831 { 00:15:12.831 "nsid": 1, 00:15:12.831 "bdev_name": "Malloc2", 00:15:12.831 "name": "Malloc2", 00:15:12.831 "nguid": "17C53BCDD2214B7C9EB9DC856BADABB8", 00:15:12.831 "uuid": "17c53bcd-d221-4b7c-9eb9-dc856badabb8" 00:15:12.831 } 00:15:12.831 ] 00:15:12.831 } 00:15:12.831 ] 00:15:12.831 05:29:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:12.831 05:29:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3198849 00:15:12.831 05:29:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:12.831 05:29:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:12.831 05:29:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:12.831 05:29:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:12.831 05:29:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:12.831 05:29:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:12.831 05:29:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:12.831 05:29:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:12.831 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.089 [2024-07-14 05:29:20.015372] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.089 Malloc3 00:15:13.089 05:29:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:13.346 [2024-07-14 05:29:20.425298] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:13.346 05:29:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:13.604 Asynchronous Event Request test 00:15:13.604 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.604 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.604 Registering asynchronous event callbacks... 00:15:13.604 Starting namespace attribute notice tests for all controllers... 00:15:13.604 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:13.604 aer_cb - Changed Namespace 00:15:13.604 Cleaning up... 
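For reference, the namespace-attach AER exercise above reduces to three RPC calls against the running target, all of which appear verbatim in the trace: create a second malloc bdev, attach it to nqn.2019-07.io.spdk:cnode1 as namespace 2 (which raises the namespace-attribute-changed notice the aer tool is waiting for), then list the subsystems again. A minimal sketch of that sequence, reusing the exact arguments from the log (rpc.py is shown here without its full workspace path, and the default RPC socket is assumed):

    # 64 MB malloc bdev with 512-byte blocks, to become the new namespace
    rpc.py bdev_malloc_create 64 512 --name Malloc3
    # attach it to cnode1 as NSID 2 -> the controller reports a namespace attribute notice ("aer_cb - Changed Namespace" above)
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
    # the subsystem listing that follows shows Malloc3 as nsid 2 of cnode1
    rpc.py nvmf_get_subsystems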
00:15:13.604 [ 00:15:13.604 { 00:15:13.604 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:13.604 "subtype": "Discovery", 00:15:13.604 "listen_addresses": [], 00:15:13.604 "allow_any_host": true, 00:15:13.604 "hosts": [] 00:15:13.604 }, 00:15:13.604 { 00:15:13.604 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:13.604 "subtype": "NVMe", 00:15:13.604 "listen_addresses": [ 00:15:13.604 { 00:15:13.604 "trtype": "VFIOUSER", 00:15:13.604 "adrfam": "IPv4", 00:15:13.604 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:13.604 "trsvcid": "0" 00:15:13.604 } 00:15:13.604 ], 00:15:13.604 "allow_any_host": true, 00:15:13.604 "hosts": [], 00:15:13.604 "serial_number": "SPDK1", 00:15:13.604 "model_number": "SPDK bdev Controller", 00:15:13.604 "max_namespaces": 32, 00:15:13.604 "min_cntlid": 1, 00:15:13.604 "max_cntlid": 65519, 00:15:13.604 "namespaces": [ 00:15:13.604 { 00:15:13.604 "nsid": 1, 00:15:13.604 "bdev_name": "Malloc1", 00:15:13.604 "name": "Malloc1", 00:15:13.604 "nguid": "DC8D4BB8CD554463940947E0998144AC", 00:15:13.604 "uuid": "dc8d4bb8-cd55-4463-9409-47e0998144ac" 00:15:13.604 }, 00:15:13.604 { 00:15:13.604 "nsid": 2, 00:15:13.604 "bdev_name": "Malloc3", 00:15:13.604 "name": "Malloc3", 00:15:13.604 "nguid": "0C8DC234D1584B41B84842B2A4CE22E6", 00:15:13.604 "uuid": "0c8dc234-d158-4b41-b848-42b2a4ce22e6" 00:15:13.604 } 00:15:13.604 ] 00:15:13.604 }, 00:15:13.604 { 00:15:13.604 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:13.604 "subtype": "NVMe", 00:15:13.604 "listen_addresses": [ 00:15:13.604 { 00:15:13.604 "trtype": "VFIOUSER", 00:15:13.604 "adrfam": "IPv4", 00:15:13.604 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:13.604 "trsvcid": "0" 00:15:13.604 } 00:15:13.604 ], 00:15:13.604 "allow_any_host": true, 00:15:13.604 "hosts": [], 00:15:13.604 "serial_number": "SPDK2", 00:15:13.604 "model_number": "SPDK bdev Controller", 00:15:13.604 "max_namespaces": 32, 00:15:13.604 "min_cntlid": 1, 00:15:13.604 "max_cntlid": 65519, 00:15:13.604 "namespaces": [ 00:15:13.604 { 00:15:13.604 "nsid": 1, 00:15:13.604 "bdev_name": "Malloc2", 00:15:13.604 "name": "Malloc2", 00:15:13.604 "nguid": "17C53BCDD2214B7C9EB9DC856BADABB8", 00:15:13.604 "uuid": "17c53bcd-d221-4b7c-9eb9-dc856badabb8" 00:15:13.604 } 00:15:13.604 ] 00:15:13.604 } 00:15:13.604 ] 00:15:13.864 05:29:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3198849 00:15:13.864 05:29:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:13.864 05:29:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:13.864 05:29:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:13.864 05:29:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:13.864 [2024-07-14 05:29:20.728644] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:15:13.864 [2024-07-14 05:29:20.728681] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3198975 ] 00:15:13.864 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.864 [2024-07-14 05:29:20.763958] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:13.864 [2024-07-14 05:29:20.770193] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:13.864 [2024-07-14 05:29:20.770221] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f961edeb000 00:15:13.864 [2024-07-14 05:29:20.771178] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:13.864 [2024-07-14 05:29:20.772168] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:13.864 [2024-07-14 05:29:20.773177] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:13.864 [2024-07-14 05:29:20.774192] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:13.864 [2024-07-14 05:29:20.775197] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:13.864 [2024-07-14 05:29:20.776207] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:13.864 [2024-07-14 05:29:20.777212] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:13.864 [2024-07-14 05:29:20.778225] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:13.864 [2024-07-14 05:29:20.779235] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:13.864 [2024-07-14 05:29:20.779257] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f961db9d000 00:15:13.864 [2024-07-14 05:29:20.780369] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:13.864 [2024-07-14 05:29:20.794569] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:13.864 [2024-07-14 05:29:20.794598] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:13.864 [2024-07-14 05:29:20.799709] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:13.864 [2024-07-14 05:29:20.799759] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:13.864 [2024-07-14 05:29:20.799838] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:15:13.864 [2024-07-14 05:29:20.799897] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:13.864 [2024-07-14 05:29:20.799909] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:13.864 [2024-07-14 05:29:20.800713] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:13.864 [2024-07-14 05:29:20.800740] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:13.864 [2024-07-14 05:29:20.800755] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:13.864 [2024-07-14 05:29:20.801721] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:13.864 [2024-07-14 05:29:20.801742] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:13.864 [2024-07-14 05:29:20.801755] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:13.864 [2024-07-14 05:29:20.802724] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:13.864 [2024-07-14 05:29:20.802744] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:13.864 [2024-07-14 05:29:20.803726] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:13.864 [2024-07-14 05:29:20.803746] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:13.864 [2024-07-14 05:29:20.803755] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:13.864 [2024-07-14 05:29:20.803767] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:13.864 [2024-07-14 05:29:20.803876] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:13.864 [2024-07-14 05:29:20.803886] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:13.864 [2024-07-14 05:29:20.803895] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:13.864 [2024-07-14 05:29:20.804736] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:13.864 [2024-07-14 05:29:20.805739] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:13.864 [2024-07-14 05:29:20.806749] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:13.864 [2024-07-14 05:29:20.807743] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:13.864 [2024-07-14 05:29:20.807820] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:13.864 [2024-07-14 05:29:20.808759] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:13.864 [2024-07-14 05:29:20.808777] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:13.864 [2024-07-14 05:29:20.808786] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:13.864 [2024-07-14 05:29:20.808809] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:13.864 [2024-07-14 05:29:20.808828] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:13.864 [2024-07-14 05:29:20.808884] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:13.864 [2024-07-14 05:29:20.808899] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:13.864 [2024-07-14 05:29:20.808934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:13.864 [2024-07-14 05:29:20.816893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:13.864 [2024-07-14 05:29:20.816919] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:13.864 [2024-07-14 05:29:20.816930] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:13.864 [2024-07-14 05:29:20.816937] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:13.864 [2024-07-14 05:29:20.816945] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:13.864 [2024-07-14 05:29:20.816953] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:13.865 [2024-07-14 05:29:20.816960] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:13.865 [2024-07-14 05:29:20.816968] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.816981] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.816997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:13.865 [2024-07-14 05:29:20.824878] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:13.865 [2024-07-14 05:29:20.824903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.865 [2024-07-14 05:29:20.824916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.865 [2024-07-14 05:29:20.824928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.865 [2024-07-14 05:29:20.824940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.865 [2024-07-14 05:29:20.824949] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.824964] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.824979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:13.865 [2024-07-14 05:29:20.832879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:13.865 [2024-07-14 05:29:20.832897] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:13.865 [2024-07-14 05:29:20.832906] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.832916] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.832929] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.832947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:13.865 [2024-07-14 05:29:20.840877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:13.865 [2024-07-14 05:29:20.840952] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.840968] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.840981] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:13.865 [2024-07-14 05:29:20.840989] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:13.865 [2024-07-14 05:29:20.840999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:13.865 
[2024-07-14 05:29:20.848876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:13.865 [2024-07-14 05:29:20.848898] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:13.865 [2024-07-14 05:29:20.848914] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.848928] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.848940] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:13.865 [2024-07-14 05:29:20.848948] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:13.865 [2024-07-14 05:29:20.848957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:13.865 [2024-07-14 05:29:20.856878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:13.865 [2024-07-14 05:29:20.856906] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.856922] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.856934] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:13.865 [2024-07-14 05:29:20.856943] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:13.865 [2024-07-14 05:29:20.856952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:13.865 [2024-07-14 05:29:20.864876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:13.865 [2024-07-14 05:29:20.864898] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.864910] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.864924] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.864934] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.864942] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.864954] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:13.865 [2024-07-14 05:29:20.864962] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:13.865 [2024-07-14 05:29:20.864971] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:13.865 [2024-07-14 05:29:20.865000] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:13.865 [2024-07-14 05:29:20.872876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:13.865 [2024-07-14 05:29:20.872903] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:13.865 [2024-07-14 05:29:20.880878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:13.865 [2024-07-14 05:29:20.880902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:13.865 [2024-07-14 05:29:20.888891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:13.865 [2024-07-14 05:29:20.888915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:13.865 [2024-07-14 05:29:20.896877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:13.865 [2024-07-14 05:29:20.896902] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:13.865 [2024-07-14 05:29:20.896912] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:13.865 [2024-07-14 05:29:20.896918] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:13.865 [2024-07-14 05:29:20.896924] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:13.865 [2024-07-14 05:29:20.896934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:13.865 [2024-07-14 05:29:20.896945] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:13.865 [2024-07-14 05:29:20.896953] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:13.865 [2024-07-14 05:29:20.896962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:13.865 [2024-07-14 05:29:20.896973] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:13.865 [2024-07-14 05:29:20.896980] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:13.865 [2024-07-14 05:29:20.896989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:13.865 [2024-07-14 05:29:20.897000] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:13.865 [2024-07-14 05:29:20.897008] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:13.865 [2024-07-14 05:29:20.897016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:13.865 [2024-07-14 05:29:20.904875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:13.865 [2024-07-14 05:29:20.904902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:13.865 [2024-07-14 05:29:20.904920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:13.865 [2024-07-14 05:29:20.904935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:13.865 ===================================================== 00:15:13.865 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:13.865 ===================================================== 00:15:13.865 Controller Capabilities/Features 00:15:13.865 ================================ 00:15:13.865 Vendor ID: 4e58 00:15:13.865 Subsystem Vendor ID: 4e58 00:15:13.865 Serial Number: SPDK2 00:15:13.865 Model Number: SPDK bdev Controller 00:15:13.865 Firmware Version: 24.05.1 00:15:13.865 Recommended Arb Burst: 6 00:15:13.865 IEEE OUI Identifier: 8d 6b 50 00:15:13.865 Multi-path I/O 00:15:13.865 May have multiple subsystem ports: Yes 00:15:13.865 May have multiple controllers: Yes 00:15:13.865 Associated with SR-IOV VF: No 00:15:13.865 Max Data Transfer Size: 131072 00:15:13.865 Max Number of Namespaces: 32 00:15:13.865 Max Number of I/O Queues: 127 00:15:13.865 NVMe Specification Version (VS): 1.3 00:15:13.865 NVMe Specification Version (Identify): 1.3 00:15:13.865 Maximum Queue Entries: 256 00:15:13.865 Contiguous Queues Required: Yes 00:15:13.865 Arbitration Mechanisms Supported 00:15:13.865 Weighted Round Robin: Not Supported 00:15:13.865 Vendor Specific: Not Supported 00:15:13.865 Reset Timeout: 15000 ms 00:15:13.865 Doorbell Stride: 4 bytes 00:15:13.865 NVM Subsystem Reset: Not Supported 00:15:13.865 Command Sets Supported 00:15:13.865 NVM Command Set: Supported 00:15:13.865 Boot Partition: Not Supported 00:15:13.865 Memory Page Size Minimum: 4096 bytes 00:15:13.865 Memory Page Size Maximum: 4096 bytes 00:15:13.865 Persistent Memory Region: Not Supported 00:15:13.866 Optional Asynchronous Events Supported 00:15:13.866 Namespace Attribute Notices: Supported 00:15:13.866 Firmware Activation Notices: Not Supported 00:15:13.866 ANA Change Notices: Not Supported 00:15:13.866 PLE Aggregate Log Change Notices: Not Supported 00:15:13.866 LBA Status Info Alert Notices: Not Supported 00:15:13.866 EGE Aggregate Log Change Notices: Not Supported 00:15:13.866 Normal NVM Subsystem Shutdown event: Not Supported 00:15:13.866 Zone Descriptor Change Notices: Not Supported 00:15:13.866 Discovery Log Change Notices: Not Supported 00:15:13.866 Controller Attributes 00:15:13.866 128-bit Host Identifier: Supported 00:15:13.866 Non-Operational Permissive Mode: Not Supported 00:15:13.866 NVM Sets: Not Supported 00:15:13.866 Read Recovery Levels: Not Supported 00:15:13.866 Endurance Groups: Not Supported 00:15:13.866 Predictable Latency Mode: Not Supported 00:15:13.866 Traffic Based Keep ALive: Not Supported 00:15:13.866 Namespace Granularity: Not 
Supported 00:15:13.866 SQ Associations: Not Supported 00:15:13.866 UUID List: Not Supported 00:15:13.866 Multi-Domain Subsystem: Not Supported 00:15:13.866 Fixed Capacity Management: Not Supported 00:15:13.866 Variable Capacity Management: Not Supported 00:15:13.866 Delete Endurance Group: Not Supported 00:15:13.866 Delete NVM Set: Not Supported 00:15:13.866 Extended LBA Formats Supported: Not Supported 00:15:13.866 Flexible Data Placement Supported: Not Supported 00:15:13.866 00:15:13.866 Controller Memory Buffer Support 00:15:13.866 ================================ 00:15:13.866 Supported: No 00:15:13.866 00:15:13.866 Persistent Memory Region Support 00:15:13.866 ================================ 00:15:13.866 Supported: No 00:15:13.866 00:15:13.866 Admin Command Set Attributes 00:15:13.866 ============================ 00:15:13.866 Security Send/Receive: Not Supported 00:15:13.866 Format NVM: Not Supported 00:15:13.866 Firmware Activate/Download: Not Supported 00:15:13.866 Namespace Management: Not Supported 00:15:13.866 Device Self-Test: Not Supported 00:15:13.866 Directives: Not Supported 00:15:13.866 NVMe-MI: Not Supported 00:15:13.866 Virtualization Management: Not Supported 00:15:13.866 Doorbell Buffer Config: Not Supported 00:15:13.866 Get LBA Status Capability: Not Supported 00:15:13.866 Command & Feature Lockdown Capability: Not Supported 00:15:13.866 Abort Command Limit: 4 00:15:13.866 Async Event Request Limit: 4 00:15:13.866 Number of Firmware Slots: N/A 00:15:13.866 Firmware Slot 1 Read-Only: N/A 00:15:13.866 Firmware Activation Without Reset: N/A 00:15:13.866 Multiple Update Detection Support: N/A 00:15:13.866 Firmware Update Granularity: No Information Provided 00:15:13.866 Per-Namespace SMART Log: No 00:15:13.866 Asymmetric Namespace Access Log Page: Not Supported 00:15:13.866 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:13.866 Command Effects Log Page: Supported 00:15:13.866 Get Log Page Extended Data: Supported 00:15:13.866 Telemetry Log Pages: Not Supported 00:15:13.866 Persistent Event Log Pages: Not Supported 00:15:13.866 Supported Log Pages Log Page: May Support 00:15:13.866 Commands Supported & Effects Log Page: Not Supported 00:15:13.866 Feature Identifiers & Effects Log Page:May Support 00:15:13.866 NVMe-MI Commands & Effects Log Page: May Support 00:15:13.866 Data Area 4 for Telemetry Log: Not Supported 00:15:13.866 Error Log Page Entries Supported: 128 00:15:13.866 Keep Alive: Supported 00:15:13.866 Keep Alive Granularity: 10000 ms 00:15:13.866 00:15:13.866 NVM Command Set Attributes 00:15:13.866 ========================== 00:15:13.866 Submission Queue Entry Size 00:15:13.866 Max: 64 00:15:13.866 Min: 64 00:15:13.866 Completion Queue Entry Size 00:15:13.866 Max: 16 00:15:13.866 Min: 16 00:15:13.866 Number of Namespaces: 32 00:15:13.866 Compare Command: Supported 00:15:13.866 Write Uncorrectable Command: Not Supported 00:15:13.866 Dataset Management Command: Supported 00:15:13.866 Write Zeroes Command: Supported 00:15:13.866 Set Features Save Field: Not Supported 00:15:13.866 Reservations: Not Supported 00:15:13.866 Timestamp: Not Supported 00:15:13.866 Copy: Supported 00:15:13.866 Volatile Write Cache: Present 00:15:13.866 Atomic Write Unit (Normal): 1 00:15:13.866 Atomic Write Unit (PFail): 1 00:15:13.866 Atomic Compare & Write Unit: 1 00:15:13.866 Fused Compare & Write: Supported 00:15:13.866 Scatter-Gather List 00:15:13.866 SGL Command Set: Supported (Dword aligned) 00:15:13.866 SGL Keyed: Not Supported 00:15:13.866 SGL Bit Bucket Descriptor: Not Supported 
00:15:13.866 SGL Metadata Pointer: Not Supported 00:15:13.866 Oversized SGL: Not Supported 00:15:13.866 SGL Metadata Address: Not Supported 00:15:13.866 SGL Offset: Not Supported 00:15:13.866 Transport SGL Data Block: Not Supported 00:15:13.866 Replay Protected Memory Block: Not Supported 00:15:13.866 00:15:13.866 Firmware Slot Information 00:15:13.866 ========================= 00:15:13.866 Active slot: 1 00:15:13.866 Slot 1 Firmware Revision: 24.05.1 00:15:13.866 00:15:13.866 00:15:13.866 Commands Supported and Effects 00:15:13.866 ============================== 00:15:13.866 Admin Commands 00:15:13.866 -------------- 00:15:13.866 Get Log Page (02h): Supported 00:15:13.866 Identify (06h): Supported 00:15:13.866 Abort (08h): Supported 00:15:13.866 Set Features (09h): Supported 00:15:13.866 Get Features (0Ah): Supported 00:15:13.866 Asynchronous Event Request (0Ch): Supported 00:15:13.866 Keep Alive (18h): Supported 00:15:13.866 I/O Commands 00:15:13.866 ------------ 00:15:13.866 Flush (00h): Supported LBA-Change 00:15:13.866 Write (01h): Supported LBA-Change 00:15:13.866 Read (02h): Supported 00:15:13.866 Compare (05h): Supported 00:15:13.866 Write Zeroes (08h): Supported LBA-Change 00:15:13.866 Dataset Management (09h): Supported LBA-Change 00:15:13.866 Copy (19h): Supported LBA-Change 00:15:13.866 Unknown (79h): Supported LBA-Change 00:15:13.866 Unknown (7Ah): Supported 00:15:13.866 00:15:13.866 Error Log 00:15:13.866 ========= 00:15:13.866 00:15:13.866 Arbitration 00:15:13.866 =========== 00:15:13.866 Arbitration Burst: 1 00:15:13.866 00:15:13.866 Power Management 00:15:13.866 ================ 00:15:13.866 Number of Power States: 1 00:15:13.866 Current Power State: Power State #0 00:15:13.866 Power State #0: 00:15:13.866 Max Power: 0.00 W 00:15:13.866 Non-Operational State: Operational 00:15:13.866 Entry Latency: Not Reported 00:15:13.866 Exit Latency: Not Reported 00:15:13.866 Relative Read Throughput: 0 00:15:13.866 Relative Read Latency: 0 00:15:13.866 Relative Write Throughput: 0 00:15:13.866 Relative Write Latency: 0 00:15:13.866 Idle Power: Not Reported 00:15:13.866 Active Power: Not Reported 00:15:13.866 Non-Operational Permissive Mode: Not Supported 00:15:13.866 00:15:13.866 Health Information 00:15:13.866 ================== 00:15:13.866 Critical Warnings: 00:15:13.866 Available Spare Space: OK 00:15:13.866 Temperature: OK 00:15:13.866 Device Reliability: OK 00:15:13.866 Read Only: No 00:15:13.866 Volatile Memory Backup: OK 00:15:13.866 Current Temperature: 0 Kelvin[2024-07-14 05:29:20.905055] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:13.866 [2024-07-14 05:29:20.912879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:13.866 [2024-07-14 05:29:20.912923] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:13.866 [2024-07-14 05:29:20.912940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.866 [2024-07-14 05:29:20.912951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.866 [2024-07-14 05:29:20.912960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.866 [2024-07-14 
05:29:20.912970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.866 [2024-07-14 05:29:20.913056] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:13.866 [2024-07-14 05:29:20.913076] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:13.866 [2024-07-14 05:29:20.914061] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:13.866 [2024-07-14 05:29:20.914134] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:13.866 [2024-07-14 05:29:20.914149] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:13.866 [2024-07-14 05:29:20.915076] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:13.866 [2024-07-14 05:29:20.915100] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:13.866 [2024-07-14 05:29:20.915152] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:13.866 [2024-07-14 05:29:20.916382] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:13.866 (-273 Celsius) 00:15:13.866 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:13.866 Available Spare: 0% 00:15:13.866 Available Spare Threshold: 0% 00:15:13.866 Life Percentage Used: 0% 00:15:13.866 Data Units Read: 0 00:15:13.866 Data Units Written: 0 00:15:13.866 Host Read Commands: 0 00:15:13.866 Host Write Commands: 0 00:15:13.866 Controller Busy Time: 0 minutes 00:15:13.866 Power Cycles: 0 00:15:13.866 Power On Hours: 0 hours 00:15:13.866 Unsafe Shutdowns: 0 00:15:13.867 Unrecoverable Media Errors: 0 00:15:13.867 Lifetime Error Log Entries: 0 00:15:13.867 Warning Temperature Time: 0 minutes 00:15:13.867 Critical Temperature Time: 0 minutes 00:15:13.867 00:15:13.867 Number of Queues 00:15:13.867 ================ 00:15:13.867 Number of I/O Submission Queues: 127 00:15:13.867 Number of I/O Completion Queues: 127 00:15:13.867 00:15:13.867 Active Namespaces 00:15:13.867 ================= 00:15:13.867 Namespace ID:1 00:15:13.867 Error Recovery Timeout: Unlimited 00:15:13.867 Command Set Identifier: NVM (00h) 00:15:13.867 Deallocate: Supported 00:15:13.867 Deallocated/Unwritten Error: Not Supported 00:15:13.867 Deallocated Read Value: Unknown 00:15:13.867 Deallocate in Write Zeroes: Not Supported 00:15:13.867 Deallocated Guard Field: 0xFFFF 00:15:13.867 Flush: Supported 00:15:13.867 Reservation: Supported 00:15:13.867 Namespace Sharing Capabilities: Multiple Controllers 00:15:13.867 Size (in LBAs): 131072 (0GiB) 00:15:13.867 Capacity (in LBAs): 131072 (0GiB) 00:15:13.867 Utilization (in LBAs): 131072 (0GiB) 00:15:13.867 NGUID: 17C53BCDD2214B7C9EB9DC856BADABB8 00:15:13.867 UUID: 17c53bcd-d221-4b7c-9eb9-dc856badabb8 00:15:13.867 Thin Provisioning: Not Supported 00:15:13.867 Per-NS Atomic Units: Yes 00:15:13.867 Atomic Boundary Size (Normal): 0 00:15:13.867 Atomic Boundary Size (PFail): 0 00:15:13.867 Atomic Boundary Offset: 0 00:15:13.867 Maximum Single Source Range 
Length: 65535 00:15:13.867 Maximum Copy Length: 65535 00:15:13.867 Maximum Source Range Count: 1 00:15:13.867 NGUID/EUI64 Never Reused: No 00:15:13.867 Namespace Write Protected: No 00:15:13.867 Number of LBA Formats: 1 00:15:13.867 Current LBA Format: LBA Format #00 00:15:13.867 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:13.867 00:15:13.867 05:29:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:14.123 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.123 [2024-07-14 05:29:21.144649] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.418 Initializing NVMe Controllers 00:15:19.418 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:19.418 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:19.418 Initialization complete. Launching workers. 00:15:19.418 ======================================================== 00:15:19.418 Latency(us) 00:15:19.418 Device Information : IOPS MiB/s Average min max 00:15:19.418 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36529.09 142.69 3503.63 1138.29 7321.06 00:15:19.418 ======================================================== 00:15:19.418 Total : 36529.09 142.69 3503.63 1138.29 7321.06 00:15:19.418 00:15:19.418 [2024-07-14 05:29:26.249225] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.418 05:29:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:19.418 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.418 [2024-07-14 05:29:26.486885] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:24.682 Initializing NVMe Controllers 00:15:24.682 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:24.682 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:24.682 Initialization complete. Launching workers. 
00:15:24.682 ======================================================== 00:15:24.682 Latency(us) 00:15:24.682 Device Information : IOPS MiB/s Average min max 00:15:24.682 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34270.79 133.87 3734.65 1194.36 7492.47 00:15:24.682 ======================================================== 00:15:24.682 Total : 34270.79 133.87 3734.65 1194.36 7492.47 00:15:24.682 00:15:24.682 [2024-07-14 05:29:31.510565] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:24.682 05:29:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:24.682 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.682 [2024-07-14 05:29:31.717606] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.942 [2024-07-14 05:29:36.868026] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.942 Initializing NVMe Controllers 00:15:29.942 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:29.942 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:29.942 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:29.942 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:29.942 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:29.942 Initialization complete. Launching workers. 00:15:29.942 Starting thread on core 2 00:15:29.942 Starting thread on core 3 00:15:29.942 Starting thread on core 1 00:15:29.942 05:29:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:29.942 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.200 [2024-07-14 05:29:37.178365] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.491 [2024-07-14 05:29:40.412144] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.491 Initializing NVMe Controllers 00:15:33.491 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.491 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.491 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:33.491 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:33.491 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:33.491 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:33.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:33.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:33.491 Initialization complete. Launching workers. 
00:15:33.491 Starting thread on core 1 with urgent priority queue 00:15:33.491 Starting thread on core 2 with urgent priority queue 00:15:33.491 Starting thread on core 3 with urgent priority queue 00:15:33.491 Starting thread on core 0 with urgent priority queue 00:15:33.491 SPDK bdev Controller (SPDK2 ) core 0: 1644.33 IO/s 60.81 secs/100000 ios 00:15:33.491 SPDK bdev Controller (SPDK2 ) core 1: 1697.67 IO/s 58.90 secs/100000 ios 00:15:33.491 SPDK bdev Controller (SPDK2 ) core 2: 1984.33 IO/s 50.39 secs/100000 ios 00:15:33.491 SPDK bdev Controller (SPDK2 ) core 3: 1941.67 IO/s 51.50 secs/100000 ios 00:15:33.491 ======================================================== 00:15:33.491 00:15:33.491 05:29:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:33.491 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.748 [2024-07-14 05:29:40.703413] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.748 Initializing NVMe Controllers 00:15:33.748 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.748 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.748 Namespace ID: 1 size: 0GB 00:15:33.748 Initialization complete. 00:15:33.748 INFO: using host memory buffer for IO 00:15:33.748 Hello world! 00:15:33.748 [2024-07-14 05:29:40.715558] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.749 05:29:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:33.749 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.006 [2024-07-14 05:29:41.006136] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:35.380 Initializing NVMe Controllers 00:15:35.380 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.380 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.380 Initialization complete. Launching workers. 
00:15:35.380 submit (in ns) avg, min, max = 8579.6, 3501.1, 4017128.9 00:15:35.380 complete (in ns) avg, min, max = 22418.0, 2060.0, 4017587.8 00:15:35.380 00:15:35.380 Submit histogram 00:15:35.380 ================ 00:15:35.380 Range in us Cumulative Count 00:15:35.380 3.484 - 3.508: 0.1388% ( 19) 00:15:35.380 3.508 - 3.532: 1.4392% ( 178) 00:15:35.380 3.532 - 3.556: 4.3030% ( 392) 00:15:35.380 3.556 - 3.579: 10.4690% ( 844) 00:15:35.380 3.579 - 3.603: 19.5792% ( 1247) 00:15:35.380 3.603 - 3.627: 31.3925% ( 1617) 00:15:35.380 3.627 - 3.650: 41.5766% ( 1394) 00:15:35.380 3.650 - 3.674: 48.0932% ( 892) 00:15:35.380 3.674 - 3.698: 53.8793% ( 792) 00:15:35.380 3.698 - 3.721: 60.2937% ( 878) 00:15:35.380 3.721 - 3.745: 64.5602% ( 584) 00:15:35.380 3.745 - 3.769: 68.4395% ( 531) 00:15:35.380 3.769 - 3.793: 70.7481% ( 316) 00:15:35.380 3.793 - 3.816: 73.4877% ( 375) 00:15:35.380 3.816 - 3.840: 76.5926% ( 425) 00:15:35.380 3.840 - 3.864: 80.3916% ( 520) 00:15:35.380 3.864 - 3.887: 83.8252% ( 470) 00:15:35.380 3.887 - 3.911: 86.2507% ( 332) 00:15:35.380 3.911 - 3.935: 88.1867% ( 265) 00:15:35.380 3.935 - 3.959: 89.6625% ( 202) 00:15:35.380 3.959 - 3.982: 91.1821% ( 208) 00:15:35.380 3.982 - 4.006: 92.4605% ( 175) 00:15:35.380 4.006 - 4.030: 93.1108% ( 89) 00:15:35.380 4.030 - 4.053: 93.9144% ( 110) 00:15:35.381 4.053 - 4.077: 94.5281% ( 84) 00:15:35.381 4.077 - 4.101: 94.9956% ( 64) 00:15:35.381 4.101 - 4.124: 95.4047% ( 56) 00:15:35.381 4.124 - 4.148: 95.7627% ( 49) 00:15:35.381 4.148 - 4.172: 95.9746% ( 29) 00:15:35.381 4.172 - 4.196: 96.0988% ( 17) 00:15:35.381 4.196 - 4.219: 96.2230% ( 17) 00:15:35.381 4.219 - 4.243: 96.3252% ( 14) 00:15:35.381 4.243 - 4.267: 96.4129% ( 12) 00:15:35.381 4.267 - 4.290: 96.5444% ( 18) 00:15:35.381 4.290 - 4.314: 96.6102% ( 9) 00:15:35.381 4.314 - 4.338: 96.6832% ( 10) 00:15:35.381 4.338 - 4.361: 96.8147% ( 18) 00:15:35.381 4.361 - 4.385: 96.8586% ( 6) 00:15:35.381 4.385 - 4.409: 96.8878% ( 4) 00:15:35.381 4.409 - 4.433: 96.9243% ( 5) 00:15:35.381 4.433 - 4.456: 96.9462% ( 3) 00:15:35.381 4.456 - 4.480: 96.9828% ( 5) 00:15:35.381 4.480 - 4.504: 97.0339% ( 7) 00:15:35.381 4.504 - 4.527: 97.0558% ( 3) 00:15:35.381 4.527 - 4.551: 97.0704% ( 2) 00:15:35.381 4.551 - 4.575: 97.0923% ( 3) 00:15:35.381 4.575 - 4.599: 97.0996% ( 1) 00:15:35.381 4.599 - 4.622: 97.1143% ( 2) 00:15:35.381 4.622 - 4.646: 97.1362% ( 3) 00:15:35.381 4.646 - 4.670: 97.1654% ( 4) 00:15:35.381 4.670 - 4.693: 97.1727% ( 1) 00:15:35.381 4.693 - 4.717: 97.1873% ( 2) 00:15:35.381 4.717 - 4.741: 97.2019% ( 2) 00:15:35.381 4.764 - 4.788: 97.2604% ( 8) 00:15:35.381 4.788 - 4.812: 97.2823% ( 3) 00:15:35.381 4.812 - 4.836: 97.3188% ( 5) 00:15:35.381 4.836 - 4.859: 97.3627% ( 6) 00:15:35.381 4.859 - 4.883: 97.4065% ( 6) 00:15:35.381 4.883 - 4.907: 97.4868% ( 11) 00:15:35.381 4.907 - 4.930: 97.5234% ( 5) 00:15:35.381 4.930 - 4.954: 97.5526% ( 4) 00:15:35.381 4.954 - 4.978: 97.6037% ( 7) 00:15:35.381 4.978 - 5.001: 97.6403% ( 5) 00:15:35.381 5.001 - 5.025: 97.6695% ( 4) 00:15:35.381 5.025 - 5.049: 97.7279% ( 8) 00:15:35.381 5.049 - 5.073: 97.8010% ( 10) 00:15:35.381 5.073 - 5.096: 97.8521% ( 7) 00:15:35.381 5.096 - 5.120: 97.8741% ( 3) 00:15:35.381 5.120 - 5.144: 97.8960% ( 3) 00:15:35.381 5.144 - 5.167: 97.9544% ( 8) 00:15:35.381 5.167 - 5.191: 97.9909% ( 5) 00:15:35.381 5.191 - 5.215: 97.9982% ( 1) 00:15:35.381 5.215 - 5.239: 98.0275% ( 4) 00:15:35.381 5.239 - 5.262: 98.0421% ( 2) 00:15:35.381 5.262 - 5.286: 98.0640% ( 3) 00:15:35.381 5.286 - 5.310: 98.0713% ( 1) 00:15:35.381 5.310 - 5.333: 98.0932% ( 3) 
00:15:35.381 5.333 - 5.357: 98.1005% ( 1) 00:15:35.381 5.357 - 5.381: 98.1371% ( 5) 00:15:35.381 5.381 - 5.404: 98.1517% ( 2) 00:15:35.381 5.452 - 5.476: 98.1590% ( 1) 00:15:35.381 5.476 - 5.499: 98.1663% ( 1) 00:15:35.381 5.499 - 5.523: 98.1882% ( 3) 00:15:35.381 5.523 - 5.547: 98.1955% ( 1) 00:15:35.381 5.547 - 5.570: 98.2101% ( 2) 00:15:35.381 5.570 - 5.594: 98.2174% ( 1) 00:15:35.381 5.641 - 5.665: 98.2320% ( 2) 00:15:35.381 5.689 - 5.713: 98.2466% ( 2) 00:15:35.381 5.713 - 5.736: 98.2539% ( 1) 00:15:35.381 5.807 - 5.831: 98.2686% ( 2) 00:15:35.381 5.831 - 5.855: 98.2759% ( 1) 00:15:35.381 5.855 - 5.879: 98.2905% ( 2) 00:15:35.381 5.879 - 5.902: 98.3051% ( 2) 00:15:35.381 5.902 - 5.926: 98.3124% ( 1) 00:15:35.381 5.950 - 5.973: 98.3197% ( 1) 00:15:35.381 5.973 - 5.997: 98.3270% ( 1) 00:15:35.381 5.997 - 6.021: 98.3416% ( 2) 00:15:35.381 6.021 - 6.044: 98.3562% ( 2) 00:15:35.381 6.044 - 6.068: 98.3708% ( 2) 00:15:35.381 6.068 - 6.116: 98.4001% ( 4) 00:15:35.381 6.116 - 6.163: 98.4147% ( 2) 00:15:35.381 6.163 - 6.210: 98.4220% ( 1) 00:15:35.381 6.210 - 6.258: 98.4366% ( 2) 00:15:35.381 6.258 - 6.305: 98.4439% ( 1) 00:15:35.381 6.400 - 6.447: 98.4512% ( 1) 00:15:35.381 6.542 - 6.590: 98.4731% ( 3) 00:15:35.381 6.684 - 6.732: 98.4804% ( 1) 00:15:35.381 6.732 - 6.779: 98.5023% ( 3) 00:15:35.381 6.779 - 6.827: 98.5243% ( 3) 00:15:35.381 6.874 - 6.921: 98.5316% ( 1) 00:15:35.381 6.921 - 6.969: 98.5389% ( 1) 00:15:35.381 6.969 - 7.016: 98.5462% ( 1) 00:15:35.381 7.159 - 7.206: 98.5535% ( 1) 00:15:35.381 7.253 - 7.301: 98.5608% ( 1) 00:15:35.381 7.301 - 7.348: 98.5681% ( 1) 00:15:35.381 7.443 - 7.490: 98.5754% ( 1) 00:15:35.381 7.585 - 7.633: 98.5827% ( 1) 00:15:35.381 7.680 - 7.727: 98.5900% ( 1) 00:15:35.381 7.870 - 7.917: 98.5973% ( 1) 00:15:35.381 7.917 - 7.964: 98.6046% ( 1) 00:15:35.381 7.964 - 8.012: 98.6119% ( 1) 00:15:35.381 8.249 - 8.296: 98.6192% ( 1) 00:15:35.381 8.533 - 8.581: 98.6411% ( 3) 00:15:35.381 8.628 - 8.676: 98.6485% ( 1) 00:15:35.381 8.676 - 8.723: 98.6558% ( 1) 00:15:35.381 8.723 - 8.770: 98.6631% ( 1) 00:15:35.381 8.770 - 8.818: 98.6704% ( 1) 00:15:35.381 8.818 - 8.865: 98.6777% ( 1) 00:15:35.381 8.865 - 8.913: 98.6923% ( 2) 00:15:35.381 8.913 - 8.960: 98.6996% ( 1) 00:15:35.381 8.960 - 9.007: 98.7069% ( 1) 00:15:35.381 9.150 - 9.197: 98.7215% ( 2) 00:15:35.381 9.197 - 9.244: 98.7288% ( 1) 00:15:35.381 9.292 - 9.339: 98.7434% ( 2) 00:15:35.381 9.481 - 9.529: 98.7507% ( 1) 00:15:35.381 9.529 - 9.576: 98.7580% ( 1) 00:15:35.381 9.766 - 9.813: 98.7653% ( 1) 00:15:35.381 9.861 - 9.908: 98.7726% ( 1) 00:15:35.381 10.003 - 10.050: 98.7800% ( 1) 00:15:35.381 10.098 - 10.145: 98.7873% ( 1) 00:15:35.381 10.193 - 10.240: 98.7946% ( 1) 00:15:35.381 10.335 - 10.382: 98.8019% ( 1) 00:15:35.381 10.430 - 10.477: 98.8092% ( 1) 00:15:35.381 10.572 - 10.619: 98.8238% ( 2) 00:15:35.381 10.809 - 10.856: 98.8384% ( 2) 00:15:35.381 10.856 - 10.904: 98.8530% ( 2) 00:15:35.381 11.046 - 11.093: 98.8603% ( 1) 00:15:35.381 11.188 - 11.236: 98.8676% ( 1) 00:15:35.381 11.283 - 11.330: 98.8749% ( 1) 00:15:35.381 11.378 - 11.425: 98.8822% ( 1) 00:15:35.381 11.425 - 11.473: 98.8895% ( 1) 00:15:35.381 11.615 - 11.662: 98.8968% ( 1) 00:15:35.381 11.662 - 11.710: 98.9041% ( 1) 00:15:35.381 11.804 - 11.852: 98.9115% ( 1) 00:15:35.381 11.852 - 11.899: 98.9188% ( 1) 00:15:35.381 11.994 - 12.041: 98.9261% ( 1) 00:15:35.381 12.421 - 12.516: 98.9480% ( 3) 00:15:35.381 12.516 - 12.610: 98.9553% ( 1) 00:15:35.381 12.800 - 12.895: 98.9699% ( 2) 00:15:35.381 12.990 - 13.084: 98.9772% ( 1) 00:15:35.381 13.274 - 
13.369: 98.9845% ( 1) 00:15:35.381 13.464 - 13.559: 98.9918% ( 1) 00:15:35.381 13.559 - 13.653: 98.9991% ( 1) 00:15:35.381 13.938 - 14.033: 99.0064% ( 1) 00:15:35.381 14.127 - 14.222: 99.0283% ( 3) 00:15:35.381 14.222 - 14.317: 99.0357% ( 1) 00:15:35.381 14.696 - 14.791: 99.0503% ( 2) 00:15:35.381 16.972 - 17.067: 99.0576% ( 1) 00:15:35.381 17.067 - 17.161: 99.0722% ( 2) 00:15:35.381 17.161 - 17.256: 99.0795% ( 1) 00:15:35.381 17.256 - 17.351: 99.1014% ( 3) 00:15:35.381 17.351 - 17.446: 99.1306% ( 4) 00:15:35.381 17.446 - 17.541: 99.1379% ( 1) 00:15:35.381 17.541 - 17.636: 99.1598% ( 3) 00:15:35.381 17.636 - 17.730: 99.1891% ( 4) 00:15:35.381 17.730 - 17.825: 99.2256% ( 5) 00:15:35.381 17.825 - 17.920: 99.2987% ( 10) 00:15:35.381 17.920 - 18.015: 99.3425% ( 6) 00:15:35.381 18.015 - 18.110: 99.3717% ( 4) 00:15:35.381 18.110 - 18.204: 99.4375% ( 9) 00:15:35.381 18.204 - 18.299: 99.4813% ( 6) 00:15:35.381 18.299 - 18.394: 99.5470% ( 9) 00:15:35.381 18.394 - 18.489: 99.6274% ( 11) 00:15:35.381 18.489 - 18.584: 99.6420% ( 2) 00:15:35.381 18.584 - 18.679: 99.6786% ( 5) 00:15:35.381 18.679 - 18.773: 99.7297% ( 7) 00:15:35.381 18.773 - 18.868: 99.7589% ( 4) 00:15:35.381 18.868 - 18.963: 99.7735% ( 2) 00:15:35.381 18.963 - 19.058: 99.7954% ( 3) 00:15:35.381 19.058 - 19.153: 99.8027% ( 1) 00:15:35.381 19.153 - 19.247: 99.8101% ( 1) 00:15:35.381 19.342 - 19.437: 99.8247% ( 2) 00:15:35.381 19.721 - 19.816: 99.8320% ( 1) 00:15:35.381 19.816 - 19.911: 99.8466% ( 2) 00:15:35.381 20.006 - 20.101: 99.8539% ( 1) 00:15:35.381 20.101 - 20.196: 99.8612% ( 1) 00:15:35.381 22.566 - 22.661: 99.8685% ( 1) 00:15:35.381 25.221 - 25.410: 99.8758% ( 1) 00:15:35.381 35.840 - 36.030: 99.8831% ( 1) 00:15:35.381 3980.705 - 4004.978: 99.9635% ( 11) 00:15:35.381 4004.978 - 4029.250: 100.0000% ( 5) 00:15:35.381 00:15:35.381 Complete histogram 00:15:35.381 ================== 00:15:35.381 Range in us Cumulative Count 00:15:35.381 2.050 - 2.062: 0.0073% ( 1) 00:15:35.381 2.062 - 2.074: 14.9401% ( 2044) 00:15:35.381 2.074 - 2.086: 38.7127% ( 3254) 00:15:35.381 2.086 - 2.098: 40.9702% ( 309) 00:15:35.381 2.098 - 2.110: 53.4191% ( 1704) 00:15:35.381 2.110 - 2.121: 62.0982% ( 1188) 00:15:35.381 2.121 - 2.133: 63.9100% ( 248) 00:15:35.381 2.133 - 2.145: 74.2475% ( 1415) 00:15:35.381 2.145 - 2.157: 79.7560% ( 754) 00:15:35.381 2.157 - 2.169: 81.0418% ( 176) 00:15:35.381 2.169 - 2.181: 86.0169% ( 681) 00:15:35.381 2.181 - 2.193: 88.3402% ( 318) 00:15:35.381 2.193 - 2.204: 88.8662% ( 72) 00:15:35.381 2.204 - 2.216: 90.2908% ( 195) 00:15:35.381 2.216 - 2.228: 91.8030% ( 207) 00:15:35.381 2.228 - 2.240: 93.1400% ( 183) 00:15:35.381 2.240 - 2.252: 94.0240% ( 121) 00:15:35.382 2.252 - 2.264: 94.4258% ( 55) 00:15:35.382 2.264 - 2.276: 94.6596% ( 32) 00:15:35.382 2.276 - 2.287: 94.8787% ( 30) 00:15:35.382 2.287 - 2.299: 95.0760% ( 27) 00:15:35.382 2.299 - 2.311: 95.2878% ( 29) 00:15:35.382 2.311 - 2.323: 95.4193% ( 18) 00:15:35.382 2.323 - 2.335: 95.4851% ( 9) 00:15:35.382 2.335 - 2.347: 95.5070% ( 3) 00:15:35.382 2.347 - 2.359: 95.5435% ( 5) 00:15:35.382 2.359 - 2.370: 95.6020% ( 8) 00:15:35.382 2.370 - 2.382: 95.6385% ( 5) 00:15:35.382 2.382 - 2.394: 95.7554% ( 16) 00:15:35.382 2.394 - 2.406: 95.8504% ( 13) 00:15:35.382 2.406 - 2.418: 96.0111% ( 22) 00:15:35.382 2.418 - 2.430: 96.2449% ( 32) 00:15:35.382 2.430 - 2.441: 96.4933% ( 34) 00:15:35.382 2.441 - 2.453: 96.6467% ( 21) 00:15:35.382 2.453 - 2.465: 96.8951% ( 34) 00:15:35.382 2.465 - 2.477: 97.1143% ( 30) 00:15:35.382 2.477 - 2.489: 97.2750% ( 22) 00:15:35.382 2.489 - 2.501: 97.3846% 
( 15) 00:15:35.382 2.501 - 2.513: 97.4942% ( 15) 00:15:35.382 2.513 - 2.524: 97.5380% ( 6) 00:15:35.382 2.524 - 2.536: 97.6184% ( 11) 00:15:35.382 2.536 - 2.548: 97.6768% ( 8) 00:15:35.382 2.548 - 2.560: 97.7206% ( 6) 00:15:35.382 2.560 - 2.572: 97.7645% ( 6) 00:15:35.382 2.572 - 2.584: 97.7937% ( 4) 00:15:35.382 2.584 - 2.596: 97.8083% ( 2) 00:15:35.382 2.596 - 2.607: 97.8521% ( 6) 00:15:35.382 2.607 - 2.619: 97.8960% ( 6) 00:15:35.382 2.619 - 2.631: 97.9179% ( 3) 00:15:35.382 2.631 - 2.643: 97.9544% ( 5) 00:15:35.382 2.643 - 2.655: 97.9690% ( 2) 00:15:35.382 2.667 - 2.679: 97.9763% ( 1) 00:15:35.382 2.679 - 2.690: 97.9982% ( 3) 00:15:35.382 2.690 - 2.702: 98.0129% ( 2) 00:15:35.382 2.702 - 2.714: 98.0421% ( 4) 00:15:35.382 2.726 - 2.738: 98.0494% ( 1) 00:15:35.382 2.738 - 2.750: 98.0713% ( 3) 00:15:35.382 2.773 - 2.785: 98.0932% ( 3) 00:15:35.382 2.797 - 2.809: 98.1078% ( 2) 00:15:35.382 2.821 - 2.833: 98.1224% ( 2) 00:15:35.382 2.833 - 2.844: 98.1297% ( 1) 00:15:35.382 2.844 - 2.856: 98.1444% ( 2) 00:15:35.382 2.868 - 2.880: 98.1517% ( 1) 00:15:35.382 2.880 - 2.892: 98.1590% ( 1) 00:15:35.382 2.892 - 2.904: 98.1663% ( 1) 00:15:35.382 2.916 - 2.927: 98.1736% ( 1) 00:15:35.382 2.927 - 2.939: 98.1809% ( 1) 00:15:35.382 2.939 - 2.951: 98.1882% ( 1) 00:15:35.382 2.951 - 2.963: 98.2028% ( 2) 00:15:35.382 2.987 - 2.999: 98.2101% ( 1) 00:15:35.382 3.058 - 3.081: 98.2247% ( 2) 00:15:35.382 3.081 - 3.105: 98.2393% ( 2) 00:15:35.382 3.105 - 3.129: 98.2539% ( 2) 00:15:35.382 3.129 - 3.153: 98.2613% ( 1) 00:15:35.382 3.153 - 3.176: 98.2905% ( 4) 00:15:35.382 3.176 - 3.200: 98.2978% ( 1) 00:15:35.382 3.200 - 3.224: 98.3124% ( 2) 00:15:35.382 3.224 - 3.247: 98.3270% ( 2) 00:15:35.382 3.247 - 3.271: 98.3343% ( 1) 00:15:35.382 3.271 - 3.295: 98.3635% ( 4) 00:15:35.382 3.295 - 3.319: 98.3708% ( 1) 00:15:35.382 3.319 - 3.342: 98.4001% ( 4) 00:15:35.382 3.342 - 3.366: 98.4220% ( 3) 00:15:35.382 3.390 - 3.413: 98.4366% ( 2) 00:15:35.382 3.413 - 3.437: 98.4512% ( 2) 00:15:35.382 3.437 - 3.461: 98.4658% ( 2) 00:15:35.382 3.508 - 3.532: 98.4804% ( 2) 00:15:35.382 3.532 - 3.556: 98.5096% ( 4) 00:15:35.382 3.556 - 3.579: 98.5243% ( 2) 00:15:35.382 3.579 - 3.603: 98.5389% ( 2) 00:15:35.382 3.603 - 3.627: 98.5681% ( 4) 00:15:35.382 3.627 - 3.650: 98.5827% ( 2) 00:15:35.382 3.674 - 3.698: 98.5900% ( 1) 00:15:35.382 3.721 - 3.745: 98.6119% ( 3) 00:15:35.382 3.745 - 3.769: 98.6265% ( 2) 00:15:35.382 3.769 - 3.793: 98.6411% ( 2) 00:15:35.382 3.816 - 3.840: 98.6558% ( 2) 00:15:35.382 3.840 - 3.864: 98.6631% ( 1) 00:15:35.382 3.864 - 3.887: 98.6850% ( 3) 00:15:35.382 3.887 - 3.911: 98.7069% ( 3) 00:15:35.382 3.911 - 3.935: 98.7142% ( 1) 00:15:35.382 3.935 - 3.959: 98.7215% ( 1) 00:15:35.382 3.982 - 4.006: 98.7288% ( 1) 00:15:35.382 4.030 - 4.053: 98.7434% ( 2) 00:15:35.382 4.101 - 4.124: 98.7507% ( 1) 00:15:35.382 4.148 - 4.172: 98.7580% ( 1) 00:15:35.382 5.807 - 5.831: 98.7653% ( 1) 00:15:35.382 5.879 - 5.902: 98.7726% ( 1) 00:15:35.382 5.950 - 5.973: 98.7800% ( 1) 00:15:35.382 6.447 - 6.495: 98.7873% ( 1) 00:15:35.382 7.016 - 7.064: 98.8019% ( 2) 00:15:35.382 7.111 - 7.159: 98.8092% ( 1) 00:15:35.382 7.253 - 7.301: 98.8238% ( 2) 00:15:35.382 7.396 - 7.443: 98.8311% ( 1) 00:15:35.382 7.490 - 7.538: 98.8384% ( 1) 00:15:35.382 7.680 - 7.727: 98.8457% ( 1) 00:15:35.382 7.727 - 7.775: 98.8530% ( 1) 00:15:35.382 8.059 - 8.107: 98.8676% ( 2) 00:15:35.382 8.296 - 8.344: 98.8749% ( 1) 00:15:35.382 8.960 - 9.007: 98.8822% ( 1) 00:15:35.382 10.999 - 11.046: 98.8895% ( 1) 00:15:35.382 12.326 - 12.421: 98.8968% ( 1) 00:15:35.382 
15.455 - 15.550: 98.9115% ( 2) 00:15:35.382 15.644 - 15.739: 98.9334% ( 3) 00:15:35.382 15.739 - 15.834: 98.9553% ( 3) 00:15:35.382 15.929 - 16.024: 98.9991% ( 6) 00:15:35.382 16.024 - 16.119: 99.0210% ( 3) 00:15:35.382 16.119 - 16.213: 99.0283% ( 1) 00:15:35.382 16.213 - 16.308: 99.0941% ( 9) 00:15:35.382 16.308 - 16.403: 99.1014% ( 1) 00:15:35.382 16.403 - 16.498: 99.1233% ( 3) 00:15:35.382 16.498 - 16.593: 99.1745% ( 7) 00:15:35.382 16.593 - 16.687: 99.2256% ( 7) 00:15:35.382 16.687 - 16.782: 99.2548% ( 4) 00:15:35.382 16.782 - 16.877: 99.2621% ( 1) 00:15:35.382 16.877 - 16.972: 99.2987% ( 5) 00:15:35.382 16.972 - 17.067: 99.3352% ( 5) 00:15:35.382 17.067 - 17.161: 99.3498%[2024-07-14 05:29:42.107731] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:35.382 ( 2) 00:15:35.382 17.446 - 17.541: 99.3717% ( 3) 00:15:35.382 17.541 - 17.636: 99.4009% ( 4) 00:15:35.382 17.636 - 17.730: 99.4229% ( 3) 00:15:35.382 17.730 - 17.825: 99.4302% ( 1) 00:15:35.382 17.825 - 17.920: 99.4375% ( 1) 00:15:35.382 18.015 - 18.110: 99.4448% ( 1) 00:15:35.382 18.110 - 18.204: 99.4594% ( 2) 00:15:35.382 18.299 - 18.394: 99.4740% ( 2) 00:15:35.382 18.394 - 18.489: 99.4886% ( 2) 00:15:35.382 20.670 - 20.764: 99.4959% ( 1) 00:15:35.382 3980.705 - 4004.978: 99.7662% ( 37) 00:15:35.382 4004.978 - 4029.250: 100.0000% ( 32) 00:15:35.382 00:15:35.382 05:29:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:35.382 05:29:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:35.382 05:29:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:35.382 05:29:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:35.382 05:29:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:35.382 [ 00:15:35.382 { 00:15:35.382 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:35.382 "subtype": "Discovery", 00:15:35.382 "listen_addresses": [], 00:15:35.382 "allow_any_host": true, 00:15:35.382 "hosts": [] 00:15:35.382 }, 00:15:35.382 { 00:15:35.382 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:35.382 "subtype": "NVMe", 00:15:35.382 "listen_addresses": [ 00:15:35.382 { 00:15:35.382 "trtype": "VFIOUSER", 00:15:35.382 "adrfam": "IPv4", 00:15:35.382 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:35.382 "trsvcid": "0" 00:15:35.382 } 00:15:35.382 ], 00:15:35.382 "allow_any_host": true, 00:15:35.382 "hosts": [], 00:15:35.382 "serial_number": "SPDK1", 00:15:35.382 "model_number": "SPDK bdev Controller", 00:15:35.382 "max_namespaces": 32, 00:15:35.382 "min_cntlid": 1, 00:15:35.382 "max_cntlid": 65519, 00:15:35.382 "namespaces": [ 00:15:35.382 { 00:15:35.382 "nsid": 1, 00:15:35.382 "bdev_name": "Malloc1", 00:15:35.382 "name": "Malloc1", 00:15:35.382 "nguid": "DC8D4BB8CD554463940947E0998144AC", 00:15:35.382 "uuid": "dc8d4bb8-cd55-4463-9409-47e0998144ac" 00:15:35.382 }, 00:15:35.382 { 00:15:35.382 "nsid": 2, 00:15:35.382 "bdev_name": "Malloc3", 00:15:35.382 "name": "Malloc3", 00:15:35.382 "nguid": "0C8DC234D1584B41B84842B2A4CE22E6", 00:15:35.382 "uuid": "0c8dc234-d158-4b41-b848-42b2a4ce22e6" 00:15:35.382 } 00:15:35.382 ] 00:15:35.382 }, 00:15:35.382 { 00:15:35.382 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:35.382 "subtype": "NVMe", 
00:15:35.382 "listen_addresses": [ 00:15:35.382 { 00:15:35.382 "trtype": "VFIOUSER", 00:15:35.382 "adrfam": "IPv4", 00:15:35.382 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:35.382 "trsvcid": "0" 00:15:35.382 } 00:15:35.382 ], 00:15:35.382 "allow_any_host": true, 00:15:35.382 "hosts": [], 00:15:35.382 "serial_number": "SPDK2", 00:15:35.382 "model_number": "SPDK bdev Controller", 00:15:35.382 "max_namespaces": 32, 00:15:35.382 "min_cntlid": 1, 00:15:35.382 "max_cntlid": 65519, 00:15:35.382 "namespaces": [ 00:15:35.382 { 00:15:35.382 "nsid": 1, 00:15:35.382 "bdev_name": "Malloc2", 00:15:35.382 "name": "Malloc2", 00:15:35.382 "nguid": "17C53BCDD2214B7C9EB9DC856BADABB8", 00:15:35.382 "uuid": "17c53bcd-d221-4b7c-9eb9-dc856badabb8" 00:15:35.382 } 00:15:35.382 ] 00:15:35.382 } 00:15:35.382 ] 00:15:35.383 05:29:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:35.383 05:29:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3201498 00:15:35.383 05:29:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:35.383 05:29:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:35.383 05:29:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:35.383 05:29:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:35.383 05:29:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:35.383 05:29:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:35.383 05:29:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:35.383 05:29:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:35.383 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.641 [2024-07-14 05:29:42.573335] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:35.641 Malloc4 00:15:35.641 05:29:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:35.899 [2024-07-14 05:29:42.926849] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:35.899 05:29:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:35.899 Asynchronous Event Request test 00:15:35.899 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.899 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.899 Registering asynchronous event callbacks... 00:15:35.899 Starting namespace attribute notice tests for all controllers... 00:15:35.899 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:35.899 aer_cb - Changed Namespace 00:15:35.899 Cleaning up... 
00:15:36.157 [ 00:15:36.157 { 00:15:36.157 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:36.157 "subtype": "Discovery", 00:15:36.157 "listen_addresses": [], 00:15:36.157 "allow_any_host": true, 00:15:36.157 "hosts": [] 00:15:36.157 }, 00:15:36.157 { 00:15:36.157 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:36.157 "subtype": "NVMe", 00:15:36.157 "listen_addresses": [ 00:15:36.157 { 00:15:36.157 "trtype": "VFIOUSER", 00:15:36.157 "adrfam": "IPv4", 00:15:36.157 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:36.157 "trsvcid": "0" 00:15:36.157 } 00:15:36.157 ], 00:15:36.157 "allow_any_host": true, 00:15:36.157 "hosts": [], 00:15:36.157 "serial_number": "SPDK1", 00:15:36.157 "model_number": "SPDK bdev Controller", 00:15:36.157 "max_namespaces": 32, 00:15:36.157 "min_cntlid": 1, 00:15:36.157 "max_cntlid": 65519, 00:15:36.157 "namespaces": [ 00:15:36.157 { 00:15:36.157 "nsid": 1, 00:15:36.157 "bdev_name": "Malloc1", 00:15:36.157 "name": "Malloc1", 00:15:36.157 "nguid": "DC8D4BB8CD554463940947E0998144AC", 00:15:36.157 "uuid": "dc8d4bb8-cd55-4463-9409-47e0998144ac" 00:15:36.157 }, 00:15:36.157 { 00:15:36.157 "nsid": 2, 00:15:36.157 "bdev_name": "Malloc3", 00:15:36.157 "name": "Malloc3", 00:15:36.157 "nguid": "0C8DC234D1584B41B84842B2A4CE22E6", 00:15:36.157 "uuid": "0c8dc234-d158-4b41-b848-42b2a4ce22e6" 00:15:36.157 } 00:15:36.157 ] 00:15:36.157 }, 00:15:36.157 { 00:15:36.157 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:36.157 "subtype": "NVMe", 00:15:36.157 "listen_addresses": [ 00:15:36.157 { 00:15:36.157 "trtype": "VFIOUSER", 00:15:36.157 "adrfam": "IPv4", 00:15:36.157 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:36.157 "trsvcid": "0" 00:15:36.157 } 00:15:36.157 ], 00:15:36.157 "allow_any_host": true, 00:15:36.157 "hosts": [], 00:15:36.157 "serial_number": "SPDK2", 00:15:36.157 "model_number": "SPDK bdev Controller", 00:15:36.157 "max_namespaces": 32, 00:15:36.157 "min_cntlid": 1, 00:15:36.157 "max_cntlid": 65519, 00:15:36.157 "namespaces": [ 00:15:36.157 { 00:15:36.157 "nsid": 1, 00:15:36.157 "bdev_name": "Malloc2", 00:15:36.157 "name": "Malloc2", 00:15:36.157 "nguid": "17C53BCDD2214B7C9EB9DC856BADABB8", 00:15:36.157 "uuid": "17c53bcd-d221-4b7c-9eb9-dc856badabb8" 00:15:36.157 }, 00:15:36.157 { 00:15:36.157 "nsid": 2, 00:15:36.157 "bdev_name": "Malloc4", 00:15:36.157 "name": "Malloc4", 00:15:36.157 "nguid": "44D264736FCF43918137062DB345793E", 00:15:36.157 "uuid": "44d26473-6fcf-4391-8137-062db345793e" 00:15:36.157 } 00:15:36.157 ] 00:15:36.157 } 00:15:36.157 ] 00:15:36.157 05:29:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3201498 00:15:36.157 05:29:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:36.157 05:29:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3195906 00:15:36.157 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3195906 ']' 00:15:36.157 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3195906 00:15:36.157 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:36.157 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:36.157 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3195906 00:15:36.157 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:36.157 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo 
']' 00:15:36.157 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3195906' 00:15:36.157 killing process with pid 3195906 00:15:36.157 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3195906 00:15:36.157 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3195906 00:15:36.724 05:29:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:36.724 05:29:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:36.724 05:29:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:36.724 05:29:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:36.724 05:29:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:36.724 05:29:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3201642 00:15:36.724 05:29:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:36.724 05:29:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3201642' 00:15:36.724 Process pid: 3201642 00:15:36.724 05:29:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:36.724 05:29:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3201642 00:15:36.724 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3201642 ']' 00:15:36.724 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.724 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:36.724 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.725 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:36.725 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:36.725 [2024-07-14 05:29:43.583020] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:36.725 [2024-07-14 05:29:43.584083] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:36.725 [2024-07-14 05:29:43.584160] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.725 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.725 [2024-07-14 05:29:43.648424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.725 [2024-07-14 05:29:43.740145] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.725 [2024-07-14 05:29:43.740205] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:36.725 [2024-07-14 05:29:43.740222] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.725 [2024-07-14 05:29:43.740236] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.725 [2024-07-14 05:29:43.740247] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.725 [2024-07-14 05:29:43.740333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.725 [2024-07-14 05:29:43.740406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.725 [2024-07-14 05:29:43.740509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.725 [2024-07-14 05:29:43.740512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.982 [2024-07-14 05:29:43.849550] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:36.982 [2024-07-14 05:29:43.849778] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:36.982 [2024-07-14 05:29:43.850078] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:36.982 [2024-07-14 05:29:43.850683] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:36.982 [2024-07-14 05:29:43.850927] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:36.982 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:36.982 05:29:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:36.982 05:29:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:37.915 05:29:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:38.197 05:29:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:38.197 05:29:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:38.197 05:29:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:38.197 05:29:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:38.197 05:29:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:38.461 Malloc1 00:15:38.461 05:29:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:38.720 05:29:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:38.979 05:29:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:39.236 05:29:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:15:39.236 05:29:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:39.236 05:29:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:39.494 Malloc2 00:15:39.494 05:29:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:39.752 05:29:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:40.010 05:29:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:40.269 05:29:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:40.269 05:29:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3201642 00:15:40.269 05:29:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3201642 ']' 00:15:40.269 05:29:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3201642 00:15:40.269 05:29:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:40.269 05:29:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:40.269 05:29:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3201642 00:15:40.269 05:29:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:40.269 05:29:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:40.269 05:29:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3201642' 00:15:40.269 killing process with pid 3201642 00:15:40.269 05:29:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3201642 00:15:40.269 05:29:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3201642 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:40.529 00:15:40.529 real 0m52.753s 00:15:40.529 user 3m28.622s 00:15:40.529 sys 0m4.203s 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:40.529 ************************************ 00:15:40.529 END TEST nvmf_vfio_user 00:15:40.529 ************************************ 00:15:40.529 05:29:47 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:40.529 05:29:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:40.529 05:29:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:40.529 05:29:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:40.529 ************************************ 00:15:40.529 START TEST nvmf_vfio_user_nvme_compliance 00:15:40.529 
************************************ 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:40.529 * Looking for test storage... 00:15:40.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3202125 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3202125' 00:15:40.529 Process pid: 3202125 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3202125 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 3202125 ']' 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:40.529 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.788 [2024-07-14 05:29:47.649269] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:40.788 [2024-07-14 05:29:47.649340] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.788 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.788 [2024-07-14 05:29:47.706936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:40.788 [2024-07-14 05:29:47.797033] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.788 [2024-07-14 05:29:47.797083] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.788 [2024-07-14 05:29:47.797098] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.788 [2024-07-14 05:29:47.797110] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.788 [2024-07-14 05:29:47.797120] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
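Before the individual CUnit cases run, compliance.sh stands up a single-namespace vfio-user target; its rpc_cmd calls traced below are equivalent to this sequence (written as direct rpc.py invocations, workspace prefixes shortened):

  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0      # 64 MB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32   # allow any host, serial 'spdk', up to 32 namespaces
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

  # The compliance binary then attaches to that endpoint for every test case
  test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'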
00:15:40.788 [2024-07-14 05:29:47.797193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.788 [2024-07-14 05:29:47.797262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.788 [2024-07-14 05:29:47.797265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.046 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:41.046 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:15:41.046 05:29:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:41.980 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:41.980 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.981 malloc0 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.981 05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.981 
05:29:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:41.981 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.239 00:15:42.239 00:15:42.239 CUnit - A unit testing framework for C - Version 2.1-3 00:15:42.239 http://cunit.sourceforge.net/ 00:15:42.239 00:15:42.239 00:15:42.239 Suite: nvme_compliance 00:15:42.239 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-14 05:29:49.141378] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.239 [2024-07-14 05:29:49.142790] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:42.239 [2024-07-14 05:29:49.142813] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:42.239 [2024-07-14 05:29:49.142840] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:42.239 [2024-07-14 05:29:49.144394] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.239 passed 00:15:42.239 Test: admin_identify_ctrlr_verify_fused ...[2024-07-14 05:29:49.227965] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.239 [2024-07-14 05:29:49.230992] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.239 passed 00:15:42.239 Test: admin_identify_ns ...[2024-07-14 05:29:49.317356] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.496 [2024-07-14 05:29:49.377887] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:42.496 [2024-07-14 05:29:49.385880] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:42.496 [2024-07-14 05:29:49.407009] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.496 passed 00:15:42.496 Test: admin_get_features_mandatory_features ...[2024-07-14 05:29:49.490797] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.496 [2024-07-14 05:29:49.493818] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.496 passed 00:15:42.496 Test: admin_get_features_optional_features ...[2024-07-14 05:29:49.578391] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.496 [2024-07-14 05:29:49.581411] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.754 passed 00:15:42.754 Test: admin_set_features_number_of_queues ...[2024-07-14 05:29:49.666542] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.754 [2024-07-14 05:29:49.770982] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.754 passed 00:15:42.754 Test: admin_get_log_page_mandatory_logs ...[2024-07-14 05:29:49.854589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.754 [2024-07-14 05:29:49.857610] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.013 passed 00:15:43.013 Test: admin_get_log_page_with_lpo ...[2024-07-14 05:29:49.939366] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.013 [2024-07-14 05:29:50.006882] 
ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:43.013 [2024-07-14 05:29:50.019973] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.013 passed 00:15:43.013 Test: fabric_property_get ...[2024-07-14 05:29:50.105083] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.013 [2024-07-14 05:29:50.106419] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:43.013 [2024-07-14 05:29:50.108109] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.271 passed 00:15:43.271 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-14 05:29:50.195689] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.271 [2024-07-14 05:29:50.197022] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:43.271 [2024-07-14 05:29:50.198712] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.271 passed 00:15:43.271 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-14 05:29:50.278831] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.271 [2024-07-14 05:29:50.363878] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:43.528 [2024-07-14 05:29:50.379875] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:43.528 [2024-07-14 05:29:50.385092] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.528 passed 00:15:43.528 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-14 05:29:50.468646] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.528 [2024-07-14 05:29:50.469962] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:43.528 [2024-07-14 05:29:50.471672] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.528 passed 00:15:43.528 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-14 05:29:50.555790] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.528 [2024-07-14 05:29:50.630878] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:43.786 [2024-07-14 05:29:50.654878] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:43.786 [2024-07-14 05:29:50.659985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.786 passed 00:15:43.786 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-14 05:29:50.743872] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.786 [2024-07-14 05:29:50.745195] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:43.786 [2024-07-14 05:29:50.745247] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:43.786 [2024-07-14 05:29:50.746896] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.786 passed 00:15:43.786 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-14 05:29:50.828023] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.044 [2024-07-14 05:29:50.923879] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:15:44.044 [2024-07-14 05:29:50.931890] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:44.044 [2024-07-14 05:29:50.939891] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:44.044 [2024-07-14 05:29:50.947892] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:44.044 [2024-07-14 05:29:50.976984] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.044 passed 00:15:44.044 Test: admin_create_io_sq_verify_pc ...[2024-07-14 05:29:51.056599] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.044 [2024-07-14 05:29:51.072898] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:44.044 [2024-07-14 05:29:51.089899] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.044 passed 00:15:44.301 Test: admin_create_io_qp_max_qps ...[2024-07-14 05:29:51.172443] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.232 [2024-07-14 05:29:52.282884] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:45.798 [2024-07-14 05:29:52.655556] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.798 passed 00:15:45.798 Test: admin_create_io_sq_shared_cq ...[2024-07-14 05:29:52.738863] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.798 [2024-07-14 05:29:52.872893] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:46.057 [2024-07-14 05:29:52.909962] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.057 passed 00:15:46.057 00:15:46.057 Run Summary: Type Total Ran Passed Failed Inactive 00:15:46.057 suites 1 1 n/a 0 0 00:15:46.057 tests 18 18 18 0 0 00:15:46.057 asserts 360 360 360 0 n/a 00:15:46.057 00:15:46.057 Elapsed time = 1.561 seconds 00:15:46.057 05:29:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3202125 00:15:46.057 05:29:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 3202125 ']' 00:15:46.057 05:29:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 3202125 00:15:46.057 05:29:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:15:46.057 05:29:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:46.057 05:29:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3202125 00:15:46.057 05:29:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:46.057 05:29:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:46.057 05:29:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3202125' 00:15:46.057 killing process with pid 3202125 00:15:46.057 05:29:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 3202125 00:15:46.057 05:29:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 3202125 00:15:46.314 05:29:53 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:46.314 00:15:46.314 real 0m5.711s 00:15:46.314 user 0m16.041s 00:15:46.314 sys 0m0.554s 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.314 ************************************ 00:15:46.314 END TEST nvmf_vfio_user_nvme_compliance 00:15:46.314 ************************************ 00:15:46.314 05:29:53 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:46.314 05:29:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:46.314 05:29:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:46.314 05:29:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:46.314 ************************************ 00:15:46.314 START TEST nvmf_vfio_user_fuzz 00:15:46.314 ************************************ 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:46.314 * Looking for test storage... 00:15:46.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.314 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3202845 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3202845' 00:15:46.315 Process pid: 3202845 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3202845 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3202845 ']' 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
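The target built below mirrors the layout used in the compliance test above (one malloc namespace behind nqn.2021-09.io.spdk:cnode0, listening on /var/run/vfio-user), and the fuzz run itself reduces to the invocation traced at vfio_user_fuzz.sh@43, roughly:

  # Time-limited run with a fixed seed so results are reproducible; flags exactly as in the trace
  trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a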
00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:46.315 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.878 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:46.878 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:15:46.878 05:29:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.811 malloc0 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:47.811 05:29:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:19.934 Fuzzing completed. 
Shutting down the fuzz application 00:16:19.934 00:16:19.934 Dumping successful admin opcodes: 00:16:19.934 8, 9, 10, 24, 00:16:19.934 Dumping successful io opcodes: 00:16:19.934 0, 00:16:19.934 NS: 0x200003a1ef00 I/O qp, Total commands completed: 616398, total successful commands: 2383, random_seed: 1879763392 00:16:19.934 NS: 0x200003a1ef00 admin qp, Total commands completed: 79033, total successful commands: 613, random_seed: 6804096 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3202845 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3202845 ']' 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 3202845 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3202845 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3202845' 00:16:19.934 killing process with pid 3202845 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 3202845 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 3202845 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:19.934 00:16:19.934 real 0m32.240s 00:16:19.934 user 0m31.449s 00:16:19.934 sys 0m30.300s 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:19.934 05:30:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:19.934 ************************************ 00:16:19.934 END TEST nvmf_vfio_user_fuzz 00:16:19.934 ************************************ 00:16:19.934 05:30:25 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:19.934 05:30:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:19.934 05:30:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:19.934 05:30:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:19.934 ************************************ 00:16:19.934 START TEST nvmf_host_management 00:16:19.934 ************************************ 
00:16:19.934 05:30:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:19.934 * Looking for test storage... 00:16:19.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:19.934 05:30:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:19.934 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:19.934 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.934 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:19.935 05:30:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.872 05:30:27 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:20.872 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:20.872 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.872 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:20.873 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:20.873 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:20.873 05:30:27 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:20.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:16:20.873 00:16:20.873 --- 10.0.0.2 ping statistics --- 00:16:20.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.873 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:20.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:16:20.873 00:16:20.873 --- 10.0.0.1 ping statistics --- 00:16:20.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.873 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3208906 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3208906 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3208906 ']' 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
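Note: the nvmf_tcp_init trace above moves one port of the E810 pair (cvl_0_0) into a fresh network namespace for the target, keeps the other port (cvl_0_1) on the host side as the initiator interface, assigns 10.0.0.2/10.0.0.1, opens TCP port 4420, verifies reachability in both directions with ping, loads nvme-tcp, and then starts nvmf_tgt inside the namespace. Condensed into a sketch (interface names, addresses and core mask as seen in this run; other hosts will differ):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &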
00:16:20.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:20.873 05:30:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:20.873 [2024-07-14 05:30:27.875849] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:20.873 [2024-07-14 05:30:27.875948] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.873 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.873 [2024-07-14 05:30:27.942064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:21.132 [2024-07-14 05:30:28.029155] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.132 [2024-07-14 05:30:28.029219] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.132 [2024-07-14 05:30:28.029242] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.132 [2024-07-14 05:30:28.029253] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.132 [2024-07-14 05:30:28.029263] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.132 [2024-07-14 05:30:28.029345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.132 [2024-07-14 05:30:28.029473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.132 [2024-07-14 05:30:28.029544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:21.132 [2024-07-14 05:30:28.029545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.132 [2024-07-14 05:30:28.170456] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.132 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.132 Malloc0 00:16:21.132 [2024-07-14 05:30:28.231474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3208955 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3208955 /var/tmp/bdevperf.sock 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3208955 ']' 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:21.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:21.391 { 00:16:21.391 "params": { 00:16:21.391 "name": "Nvme$subsystem", 00:16:21.391 "trtype": "$TEST_TRANSPORT", 00:16:21.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:21.391 "adrfam": "ipv4", 00:16:21.391 "trsvcid": "$NVMF_PORT", 00:16:21.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:21.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:21.391 "hdgst": ${hdgst:-false}, 00:16:21.391 "ddgst": ${ddgst:-false} 00:16:21.391 }, 00:16:21.391 "method": "bdev_nvme_attach_controller" 00:16:21.391 } 00:16:21.391 EOF 00:16:21.391 )") 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:21.391 05:30:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:21.391 "params": { 00:16:21.391 "name": "Nvme0", 00:16:21.391 "trtype": "tcp", 00:16:21.391 "traddr": "10.0.0.2", 00:16:21.391 "adrfam": "ipv4", 00:16:21.391 "trsvcid": "4420", 00:16:21.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:21.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:21.391 "hdgst": false, 00:16:21.391 "ddgst": false 00:16:21.391 }, 00:16:21.391 "method": "bdev_nvme_attach_controller" 00:16:21.391 }' 00:16:21.391 [2024-07-14 05:30:28.312242] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:21.391 [2024-07-14 05:30:28.312333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208955 ] 00:16:21.391 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.391 [2024-07-14 05:30:28.379324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.391 [2024-07-14 05:30:28.466808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.962 Running I/O for 10 seconds... 
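Note: the gen_nvmf_target_json trace above builds a single bdev_nvme_attach_controller entry (Nvme0 on 10.0.0.2:4420, cnode0/host0) and feeds it to bdevperf over /dev/fd/63 via process substitution, together with the flags -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 seen on the command line. A rough equivalent that writes the config to a file instead is sketched below; the outer "subsystems"/"bdev" wrapper is an assumption about the full document gen_nvmf_target_json emits, since the trace only prints the inner fragment, and the file name is hypothetical.

    # write the attach-controller config to a temporary file
    cat > /tmp/bdevperf_nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # run the same verify workload against it (paths relative to an SPDK checkout)
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10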
00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=3 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']' 00:16:21.962 05:30:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=323 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 323 -ge 100 ']' 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.222 05:30:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:22.222 [2024-07-14 05:30:29.162451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.222 [2024-07-14 05:30:29.162496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.222 [2024-07-14 05:30:29.162521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.222 [2024-07-14 05:30:29.162536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.222 [2024-07-14 05:30:29.162550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.222 [2024-07-14 05:30:29.162564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.222 [2024-07-14 05:30:29.162579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.222 [2024-07-14 05:30:29.162592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.222 [2024-07-14 05:30:29.162614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21441e0 is same with the state(5) to be set 00:16:22.222 [2024-07-14 05:30:29.163200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.222 [2024-07-14 05:30:29.163227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.222 [2024-07-14 05:30:29.163261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.222 [2024-07-14 05:30:29.163277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.222 [2024-07-14 05:30:29.163293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.222 
[2024-07-14 05:30:29.163316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.222 [2024-07-14 05:30:29.163330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.222 [2024-07-14 05:30:29.163344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.222 [2024-07-14 05:30:29.163359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.222 [2024-07-14 05:30:29.163373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.222 [2024-07-14 05:30:29.163388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.222 [2024-07-14 05:30:29.163402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.222 [2024-07-14 05:30:29.163417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.222 [2024-07-14 05:30:29.163432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.222 [2024-07-14 05:30:29.163447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.163460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.163476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.163490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.163505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.163519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.163535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.163548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.163563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.163577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.163597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 
05:30:29.163612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.163627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.163641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.163656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.163670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.163685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.163699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.163714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.163728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.163744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.163759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.163775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.163789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.163805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.163819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.163835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.163871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.163891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.163921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.163940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 
05:30:29.163956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.163972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.163988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 
05:30:29.164296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 
05:30:29.164589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.223 [2024-07-14 05:30:29.164778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.223 [2024-07-14 05:30:29.164796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.224 [2024-07-14 05:30:29.164811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.224 [2024-07-14 05:30:29.164825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.224 [2024-07-14 05:30:29.164840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.224 [2024-07-14 05:30:29.164877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.224 [2024-07-14 05:30:29.164894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.224 [2024-07-14 
05:30:29.164909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.224 [2024-07-14 05:30:29.164923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.224 [2024-07-14 05:30:29.164937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.224 [2024-07-14 05:30:29.164953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.224 [2024-07-14 05:30:29.164967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.224 [2024-07-14 05:30:29.164983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.224 [2024-07-14 05:30:29.164998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.224 [2024-07-14 05:30:29.165014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.224 [2024-07-14 05:30:29.165029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.224 [2024-07-14 05:30:29.165045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.224 [2024-07-14 05:30:29.165060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.224 [2024-07-14 05:30:29.165075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.224 [2024-07-14 05:30:29.165089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.224 [2024-07-14 05:30:29.165105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.224 [2024-07-14 05:30:29.165120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.224 [2024-07-14 05:30:29.165135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.224 [2024-07-14 05:30:29.165156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.224 [2024-07-14 05:30:29.165187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.224 [2024-07-14 05:30:29.165202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.224 [2024-07-14 05:30:29.165225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.224 [2024-07-14 
05:30:29.165240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.224 [2024-07-14 05:30:29.165254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.224 [2024-07-14 05:30:29.165268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.224 [2024-07-14 05:30:29.165345] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2555110 was disconnected and freed. reset controller. 00:16:22.224 [2024-07-14 05:30:29.166471] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:22.224 05:30:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.224 05:30:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:22.224 task offset: 49152 on job bdev=Nvme0n1 fails 00:16:22.224 00:16:22.224 Latency(us) 00:16:22.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.224 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:22.224 Job: Nvme0n1 ended in about 0.40 seconds with error 00:16:22.224 Verification LBA range: start 0x0 length 0x400 00:16:22.224 Nvme0n1 : 0.40 963.73 60.23 160.62 0.00 55413.15 2536.49 51263.72 00:16:22.224 =================================================================================================================== 00:16:22.224 Total : 963.73 60.23 160.62 0.00 55413.15 2536.49 51263.72 00:16:22.224 [2024-07-14 05:30:29.168353] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:22.224 [2024-07-14 05:30:29.168393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21441e0 (9): Bad file descriptor 00:16:22.224 [2024-07-14 05:30:29.218734] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
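Note: the ABORTED - SQ DELETION storm and the controller reset above are the intended failure injection. Once the iostat poll earlier in the trace sees read_io_count reach 323 (>= 100), the host is removed from and re-added to cnode0, which tears down the initiator's queues and forces the reset that bdev_nvme then reports as successful. A rough sketch of those two steps using the same RPCs and jq filter visible in the trace (rpc.py path assumed relative to an SPDK checkout; the harness's rpc_cmd wrapper is what the script actually uses):

    # poll bdevperf until at least 100 reads have completed on Nvme0n1
    for _ in $(seq 10); do
        reads=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break
        sleep 0.25
    done
    # drop and re-add the host on the target side to abort in-flight I/O
    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0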
00:16:23.157 05:30:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3208955 00:16:23.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3208955) - No such process 00:16:23.157 05:30:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:23.157 05:30:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:23.157 05:30:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:23.157 05:30:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:23.157 05:30:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:23.157 05:30:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:23.157 05:30:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:23.157 05:30:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:23.157 { 00:16:23.157 "params": { 00:16:23.157 "name": "Nvme$subsystem", 00:16:23.157 "trtype": "$TEST_TRANSPORT", 00:16:23.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.157 "adrfam": "ipv4", 00:16:23.157 "trsvcid": "$NVMF_PORT", 00:16:23.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.157 "hdgst": ${hdgst:-false}, 00:16:23.157 "ddgst": ${ddgst:-false} 00:16:23.157 }, 00:16:23.157 "method": "bdev_nvme_attach_controller" 00:16:23.157 } 00:16:23.157 EOF 00:16:23.157 )") 00:16:23.157 05:30:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:23.157 05:30:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:23.157 05:30:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:23.157 05:30:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:23.157 "params": { 00:16:23.157 "name": "Nvme0", 00:16:23.157 "trtype": "tcp", 00:16:23.157 "traddr": "10.0.0.2", 00:16:23.157 "adrfam": "ipv4", 00:16:23.157 "trsvcid": "4420", 00:16:23.157 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:23.157 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:23.157 "hdgst": false, 00:16:23.157 "ddgst": false 00:16:23.157 }, 00:16:23.157 "method": "bdev_nvme_attach_controller" 00:16:23.157 }' 00:16:23.157 [2024-07-14 05:30:30.216316] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:23.157 [2024-07-14 05:30:30.216407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209226 ] 00:16:23.157 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.415 [2024-07-14 05:30:30.278687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.415 [2024-07-14 05:30:30.370421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.673 Running I/O for 1 seconds... 
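Note: the kill -9 of the first bdevperf pid fails with "No such process" because the application already stopped itself (the spdk_app_stop warning above) after the induced reset; the script tolerates that with the `true` step and then starts a second, clean one-second verify pass against the same generated JSON. Roughly, reusing the hypothetical config file from the earlier sketch:

    kill -9 "$perfpid" 2>/dev/null || true     # old bdevperf may already have exited
    ./build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1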
00:16:24.607 00:16:24.607 Latency(us) 00:16:24.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.607 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:24.607 Verification LBA range: start 0x0 length 0x400 00:16:24.607 Nvme0n1 : 1.04 1117.25 69.83 0.00 0.00 56497.79 13981.01 49710.27 00:16:24.607 =================================================================================================================== 00:16:24.607 Total : 1117.25 69.83 0.00 0.00 56497.79 13981.01 49710.27 00:16:24.866 05:30:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:24.866 05:30:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:24.866 05:30:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:24.866 05:30:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:24.866 05:30:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:24.866 05:30:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:24.866 05:30:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:24.866 05:30:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:24.866 05:30:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:24.866 05:30:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:24.866 05:30:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:24.866 rmmod nvme_tcp 00:16:24.866 rmmod nvme_fabrics 00:16:25.124 rmmod nvme_keyring 00:16:25.124 05:30:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:25.124 05:30:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:25.124 05:30:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:25.124 05:30:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3208906 ']' 00:16:25.124 05:30:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3208906 00:16:25.124 05:30:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 3208906 ']' 00:16:25.124 05:30:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 3208906 00:16:25.124 05:30:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:25.124 05:30:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:25.124 05:30:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3208906 00:16:25.124 05:30:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:25.124 05:30:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:25.124 05:30:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3208906' 00:16:25.124 killing process with pid 3208906 00:16:25.124 05:30:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 3208906 00:16:25.124 05:30:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 3208906 00:16:25.383 [2024-07-14 05:30:32.232137] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:25.383 05:30:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:25.383 05:30:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:25.383 05:30:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:25.383 05:30:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.383 05:30:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:25.383 05:30:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.383 05:30:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.383 05:30:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.282 05:30:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:27.282 05:30:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:27.282 00:16:27.282 real 0m8.707s 00:16:27.282 user 0m19.942s 00:16:27.282 sys 0m2.689s 00:16:27.282 05:30:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:27.282 05:30:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:27.282 ************************************ 00:16:27.282 END TEST nvmf_host_management 00:16:27.282 ************************************ 00:16:27.282 05:30:34 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:27.282 05:30:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:27.282 05:30:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:27.282 05:30:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:27.282 ************************************ 00:16:27.282 START TEST nvmf_lvol 00:16:27.282 ************************************ 00:16:27.282 05:30:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:27.540 * Looking for test storage... 
00:16:27.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.540 05:30:34 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.540 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:27.541 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:27.541 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:27.541 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.541 05:30:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.541 05:30:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.541 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:27.541 05:30:34 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:27.541 05:30:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:27.541 05:30:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:29.441 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:29.441 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:29.441 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:29.441 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:29.441 
05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:29.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:29.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:16:29.441 00:16:29.441 --- 10.0.0.2 ping statistics --- 00:16:29.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.441 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:29.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:29.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:16:29.441 00:16:29.441 --- 10.0.0.1 ping statistics --- 00:16:29.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.441 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:16:29.441 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:29.442 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:29.442 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:29.442 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:29.442 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:29.442 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:29.442 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:29.442 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:29.442 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:29.442 05:30:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:29.442 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:29.442 05:30:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:29.442 05:30:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:29.699 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3211420 00:16:29.699 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:29.699 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3211420 00:16:29.699 05:30:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 3211420 ']' 00:16:29.699 05:30:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.699 05:30:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:29.699 05:30:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.699 05:30:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:29.699 05:30:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:29.699 [2024-07-14 05:30:36.593400] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:29.699 [2024-07-14 05:30:36.593475] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.699 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.699 [2024-07-14 05:30:36.661130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:29.699 [2024-07-14 05:30:36.752879] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.699 [2024-07-14 05:30:36.752945] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:29.699 [2024-07-14 05:30:36.752973] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.699 [2024-07-14 05:30:36.752993] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.699 [2024-07-14 05:30:36.753005] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.699 [2024-07-14 05:30:36.753069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.699 [2024-07-14 05:30:36.753124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.699 [2024-07-14 05:30:36.753141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.957 05:30:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:29.957 05:30:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:29.957 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:29.957 05:30:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:29.957 05:30:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:29.957 05:30:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.957 05:30:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:30.215 [2024-07-14 05:30:37.099023] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.215 05:30:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:30.473 05:30:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:30.473 05:30:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:30.731 05:30:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:30.731 05:30:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:30.989 05:30:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:31.247 05:30:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=08f7f5e7-26c8-45c1-9883-390ceac52d7a 00:16:31.247 05:30:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 08f7f5e7-26c8-45c1-9883-390ceac52d7a lvol 20 00:16:31.505 05:30:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=262f3031-8b05-4fc3-a778-fce3ced09cd9 00:16:31.505 05:30:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:31.763 05:30:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 262f3031-8b05-4fc3-a778-fce3ced09cd9 00:16:32.021 05:30:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:16:32.278 [2024-07-14 05:30:39.154513] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:32.278 05:30:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:32.535 05:30:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3211724 00:16:32.535 05:30:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:32.535 05:30:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:32.535 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.499 05:30:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 262f3031-8b05-4fc3-a778-fce3ced09cd9 MY_SNAPSHOT 00:16:33.758 05:30:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=49740455-d277-4ed0-852b-ae6bd753e4bd 00:16:33.758 05:30:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 262f3031-8b05-4fc3-a778-fce3ced09cd9 30 00:16:34.015 05:30:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 49740455-d277-4ed0-852b-ae6bd753e4bd MY_CLONE 00:16:34.279 05:30:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ec19f780-ff7e-46ff-b749-836796261443 00:16:34.279 05:30:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ec19f780-ff7e-46ff-b749-836796261443 00:16:34.845 05:30:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3211724 00:16:42.948 Initializing NVMe Controllers 00:16:42.948 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:42.948 Controller IO queue size 128, less than required. 00:16:42.948 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:42.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:42.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:42.948 Initialization complete. Launching workers. 
00:16:42.948 ======================================================== 00:16:42.948 Latency(us) 00:16:42.948 Device Information : IOPS MiB/s Average min max 00:16:42.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10775.90 42.09 11880.81 1406.88 67007.78 00:16:42.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9868.40 38.55 12975.00 2093.63 65214.38 00:16:42.948 ======================================================== 00:16:42.948 Total : 20644.30 80.64 12403.86 1406.88 67007.78 00:16:42.948 00:16:42.948 05:30:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:43.206 05:30:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 262f3031-8b05-4fc3-a778-fce3ced09cd9 00:16:43.464 05:30:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08f7f5e7-26c8-45c1-9883-390ceac52d7a 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:43.722 rmmod nvme_tcp 00:16:43.722 rmmod nvme_fabrics 00:16:43.722 rmmod nvme_keyring 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3211420 ']' 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3211420 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 3211420 ']' 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 3211420 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3211420 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3211420' 00:16:43.722 killing process with pid 3211420 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 3211420 00:16:43.722 05:30:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 3211420 00:16:43.981 05:30:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:43.981 
05:30:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:43.981 05:30:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:43.981 05:30:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.981 05:30:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.982 05:30:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.982 05:30:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.982 05:30:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.517 05:30:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:46.517 00:16:46.517 real 0m18.651s 00:16:46.517 user 1m3.857s 00:16:46.517 sys 0m5.544s 00:16:46.517 05:30:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:46.517 05:30:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:46.517 ************************************ 00:16:46.517 END TEST nvmf_lvol 00:16:46.517 ************************************ 00:16:46.517 05:30:53 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:46.517 05:30:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:46.517 05:30:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:46.517 05:30:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:46.517 ************************************ 00:16:46.517 START TEST nvmf_lvs_grow 00:16:46.517 ************************************ 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:46.517 * Looking for test storage... 
00:16:46.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:46.517 05:30:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:48.418 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:48.418 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:48.418 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:48.418 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:48.418 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:48.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:16:48.418 00:16:48.418 --- 10.0.0.2 ping statistics --- 00:16:48.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.419 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:48.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:48.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:16:48.419 00:16:48.419 --- 10.0.0.1 ping statistics --- 00:16:48.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.419 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3214989 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3214989 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3214989 ']' 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:48.419 05:30:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:48.419 [2024-07-14 05:30:55.323952] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:48.419 [2024-07-14 05:30:55.324038] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.419 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.419 [2024-07-14 05:30:55.388123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.419 [2024-07-14 05:30:55.473421] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.419 [2024-07-14 05:30:55.473491] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:48.419 [2024-07-14 05:30:55.473515] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.419 [2024-07-14 05:30:55.473526] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.419 [2024-07-14 05:30:55.473534] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.419 [2024-07-14 05:30:55.473566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.677 05:30:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:48.677 05:30:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:16:48.677 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:48.677 05:30:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:48.677 05:30:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:48.677 05:30:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.677 05:30:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:48.936 [2024-07-14 05:30:55.822064] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.936 05:30:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:48.936 05:30:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:48.936 05:30:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:48.936 05:30:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:48.936 ************************************ 00:16:48.936 START TEST lvs_grow_clean 00:16:48.936 ************************************ 00:16:48.936 05:30:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:16:48.936 05:30:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:48.936 05:30:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:48.936 05:30:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:48.936 05:30:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:48.936 05:30:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:48.936 05:30:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:48.936 05:30:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:48.936 05:30:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:48.936 05:30:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:49.194 05:30:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:16:49.194 05:30:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:49.452 05:30:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e2fadffc-2497-4c33-9aeb-0e80e8903a5e 00:16:49.452 05:30:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2fadffc-2497-4c33-9aeb-0e80e8903a5e 00:16:49.452 05:30:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:49.710 05:30:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:49.711 05:30:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:49.711 05:30:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e2fadffc-2497-4c33-9aeb-0e80e8903a5e lvol 150 00:16:49.969 05:30:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b43ece13-3fec-4df6-81fc-ec69c9b4ad8a 00:16:49.969 05:30:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:49.969 05:30:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:50.228 [2024-07-14 05:30:57.148129] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:50.228 [2024-07-14 05:30:57.148220] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:50.228 true 00:16:50.228 05:30:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2fadffc-2497-4c33-9aeb-0e80e8903a5e 00:16:50.228 05:30:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:50.501 05:30:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:50.501 05:30:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:50.758 05:30:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b43ece13-3fec-4df6-81fc-ec69c9b4ad8a 00:16:51.015 05:30:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:51.272 [2024-07-14 05:30:58.211374] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.272 05:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:51.530 05:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3215423 00:16:51.530 05:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:51.530 05:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3215423 /var/tmp/bdevperf.sock 00:16:51.530 05:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3215423 ']' 00:16:51.530 05:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.530 05:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:51.530 05:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:51.530 05:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:51.530 05:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:51.530 05:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:51.530 [2024-07-14 05:30:58.541971] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:16:51.530 [2024-07-14 05:30:58.542059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3215423 ] 00:16:51.530 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.530 [2024-07-14 05:30:58.600591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.786 [2024-07-14 05:30:58.686749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.787 05:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:51.787 05:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:16:51.787 05:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:52.350 Nvme0n1 00:16:52.350 05:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:52.607 [ 00:16:52.607 { 00:16:52.607 "name": "Nvme0n1", 00:16:52.607 "aliases": [ 00:16:52.607 "b43ece13-3fec-4df6-81fc-ec69c9b4ad8a" 00:16:52.607 ], 00:16:52.607 "product_name": "NVMe disk", 00:16:52.607 "block_size": 4096, 00:16:52.607 "num_blocks": 38912, 00:16:52.607 "uuid": "b43ece13-3fec-4df6-81fc-ec69c9b4ad8a", 00:16:52.607 "assigned_rate_limits": { 00:16:52.607 "rw_ios_per_sec": 0, 00:16:52.607 "rw_mbytes_per_sec": 0, 00:16:52.607 "r_mbytes_per_sec": 0, 00:16:52.607 "w_mbytes_per_sec": 0 00:16:52.607 }, 00:16:52.607 "claimed": false, 00:16:52.607 "zoned": false, 00:16:52.607 "supported_io_types": { 00:16:52.607 "read": true, 00:16:52.607 "write": true, 00:16:52.607 "unmap": true, 00:16:52.607 "write_zeroes": true, 00:16:52.607 "flush": true, 00:16:52.607 "reset": true, 00:16:52.607 "compare": true, 00:16:52.607 "compare_and_write": true, 00:16:52.607 "abort": true, 00:16:52.607 "nvme_admin": true, 00:16:52.607 "nvme_io": true 00:16:52.607 }, 00:16:52.607 "memory_domains": [ 00:16:52.607 { 00:16:52.607 "dma_device_id": "system", 00:16:52.607 "dma_device_type": 1 00:16:52.607 } 00:16:52.607 ], 00:16:52.607 "driver_specific": { 00:16:52.607 "nvme": [ 00:16:52.607 { 00:16:52.607 "trid": { 00:16:52.607 "trtype": "TCP", 00:16:52.607 "adrfam": "IPv4", 00:16:52.607 "traddr": "10.0.0.2", 00:16:52.607 "trsvcid": "4420", 00:16:52.607 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:52.607 }, 00:16:52.607 "ctrlr_data": { 00:16:52.607 "cntlid": 1, 00:16:52.607 "vendor_id": "0x8086", 00:16:52.607 "model_number": "SPDK bdev Controller", 00:16:52.607 "serial_number": "SPDK0", 00:16:52.607 "firmware_revision": "24.05.1", 00:16:52.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:52.607 "oacs": { 00:16:52.607 "security": 0, 00:16:52.607 "format": 0, 00:16:52.607 "firmware": 0, 00:16:52.607 "ns_manage": 0 00:16:52.607 }, 00:16:52.607 "multi_ctrlr": true, 00:16:52.607 "ana_reporting": false 00:16:52.607 }, 00:16:52.607 "vs": { 00:16:52.607 "nvme_version": "1.3" 00:16:52.607 }, 00:16:52.607 "ns_data": { 00:16:52.607 "id": 1, 00:16:52.607 "can_share": true 00:16:52.607 } 00:16:52.607 } 00:16:52.607 ], 00:16:52.607 "mp_policy": "active_passive" 00:16:52.607 } 00:16:52.607 } 00:16:52.607 ] 00:16:52.607 05:30:59 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3215552 00:16:52.607 05:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:52.607 05:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:52.607 Running I/O for 10 seconds... 00:16:53.590 Latency(us) 00:16:53.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:53.590 Nvme0n1 : 1.00 14023.00 54.78 0.00 0.00 0.00 0.00 0.00 00:16:53.590 =================================================================================================================== 00:16:53.590 Total : 14023.00 54.78 0.00 0.00 0.00 0.00 0.00 00:16:53.590 00:16:54.524 05:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e2fadffc-2497-4c33-9aeb-0e80e8903a5e 00:16:54.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:54.782 Nvme0n1 : 2.00 14212.00 55.52 0.00 0.00 0.00 0.00 0.00 00:16:54.782 =================================================================================================================== 00:16:54.782 Total : 14212.00 55.52 0.00 0.00 0.00 0.00 0.00 00:16:54.782 00:16:54.782 true 00:16:54.782 05:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2fadffc-2497-4c33-9aeb-0e80e8903a5e 00:16:54.782 05:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:55.040 05:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:55.040 05:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:55.040 05:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3215552 00:16:55.606 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.606 Nvme0n1 : 3.00 14322.67 55.95 0.00 0.00 0.00 0.00 0.00 00:16:55.606 =================================================================================================================== 00:16:55.606 Total : 14322.67 55.95 0.00 0.00 0.00 0.00 0.00 00:16:55.606 00:16:56.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:56.541 Nvme0n1 : 4.00 14338.00 56.01 0.00 0.00 0.00 0.00 0.00 00:16:56.541 =================================================================================================================== 00:16:56.541 Total : 14338.00 56.01 0.00 0.00 0.00 0.00 0.00 00:16:56.541 00:16:57.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.911 Nvme0n1 : 5.00 14350.20 56.06 0.00 0.00 0.00 0.00 0.00 00:16:57.911 =================================================================================================================== 00:16:57.911 Total : 14350.20 56.06 0.00 0.00 0.00 0.00 0.00 00:16:57.911 00:16:58.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.846 Nvme0n1 : 6.00 14380.00 56.17 0.00 0.00 0.00 0.00 0.00 00:16:58.846 
=================================================================================================================== 00:16:58.846 Total : 14380.00 56.17 0.00 0.00 0.00 0.00 0.00 00:16:58.846 00:16:59.783 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.783 Nvme0n1 : 7.00 14474.14 56.54 0.00 0.00 0.00 0.00 0.00 00:16:59.783 =================================================================================================================== 00:16:59.783 Total : 14474.14 56.54 0.00 0.00 0.00 0.00 0.00 00:16:59.783 00:17:00.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.716 Nvme0n1 : 8.00 14504.88 56.66 0.00 0.00 0.00 0.00 0.00 00:17:00.716 =================================================================================================================== 00:17:00.716 Total : 14504.88 56.66 0.00 0.00 0.00 0.00 0.00 00:17:00.716 00:17:01.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.649 Nvme0n1 : 9.00 14521.78 56.73 0.00 0.00 0.00 0.00 0.00 00:17:01.649 =================================================================================================================== 00:17:01.649 Total : 14521.78 56.73 0.00 0.00 0.00 0.00 0.00 00:17:01.649 00:17:02.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.582 Nvme0n1 : 10.00 14528.80 56.75 0.00 0.00 0.00 0.00 0.00 00:17:02.582 =================================================================================================================== 00:17:02.582 Total : 14528.80 56.75 0.00 0.00 0.00 0.00 0.00 00:17:02.582 00:17:02.582 00:17:02.582 Latency(us) 00:17:02.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.582 Nvme0n1 : 10.01 14529.07 56.75 0.00 0.00 8803.74 2269.49 15922.82 00:17:02.582 =================================================================================================================== 00:17:02.582 Total : 14529.07 56.75 0.00 0.00 8803.74 2269.49 15922.82 00:17:02.582 0 00:17:02.582 05:31:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3215423 00:17:02.582 05:31:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 3215423 ']' 00:17:02.582 05:31:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3215423 00:17:02.840 05:31:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:02.840 05:31:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:02.840 05:31:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3215423 00:17:02.840 05:31:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:02.840 05:31:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:02.840 05:31:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3215423' 00:17:02.840 killing process with pid 3215423 00:17:02.840 05:31:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3215423 00:17:02.840 Received shutdown signal, test time was about 10.000000 seconds 00:17:02.840 00:17:02.840 Latency(us) 00:17:02.840 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:02.840 =================================================================================================================== 00:17:02.840 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:02.840 05:31:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3215423 00:17:02.840 05:31:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:03.098 05:31:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:03.663 05:31:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2fadffc-2497-4c33-9aeb-0e80e8903a5e 00:17:03.663 05:31:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:03.663 05:31:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:03.663 05:31:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:03.663 05:31:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:03.920 [2024-07-14 05:31:10.971718] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:03.920 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2fadffc-2497-4c33-9aeb-0e80e8903a5e 00:17:03.920 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:03.920 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2fadffc-2497-4c33-9aeb-0e80e8903a5e 00:17:03.920 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:03.920 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.920 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:03.920 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.920 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:03.920 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.920 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:03.920 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:03.920 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2fadffc-2497-4c33-9aeb-0e80e8903a5e 00:17:04.186 request: 00:17:04.186 { 00:17:04.186 "uuid": "e2fadffc-2497-4c33-9aeb-0e80e8903a5e", 00:17:04.186 "method": "bdev_lvol_get_lvstores", 00:17:04.186 "req_id": 1 00:17:04.186 } 00:17:04.186 Got JSON-RPC error response 00:17:04.186 response: 00:17:04.186 { 00:17:04.186 "code": -19, 00:17:04.186 "message": "No such device" 00:17:04.186 } 00:17:04.186 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:04.186 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:04.186 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:04.186 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:04.186 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:04.446 aio_bdev 00:17:04.446 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b43ece13-3fec-4df6-81fc-ec69c9b4ad8a 00:17:04.446 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=b43ece13-3fec-4df6-81fc-ec69c9b4ad8a 00:17:04.446 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:04.446 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:04.446 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:04.446 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:04.446 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:04.703 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b43ece13-3fec-4df6-81fc-ec69c9b4ad8a -t 2000 00:17:04.961 [ 00:17:04.961 { 00:17:04.961 "name": "b43ece13-3fec-4df6-81fc-ec69c9b4ad8a", 00:17:04.961 "aliases": [ 00:17:04.961 "lvs/lvol" 00:17:04.961 ], 00:17:04.961 "product_name": "Logical Volume", 00:17:04.961 "block_size": 4096, 00:17:04.961 "num_blocks": 38912, 00:17:04.961 "uuid": "b43ece13-3fec-4df6-81fc-ec69c9b4ad8a", 00:17:04.961 "assigned_rate_limits": { 00:17:04.961 "rw_ios_per_sec": 0, 00:17:04.961 "rw_mbytes_per_sec": 0, 00:17:04.961 "r_mbytes_per_sec": 0, 00:17:04.961 "w_mbytes_per_sec": 0 00:17:04.961 }, 00:17:04.961 "claimed": false, 00:17:04.961 "zoned": false, 00:17:04.961 "supported_io_types": { 00:17:04.961 "read": true, 00:17:04.961 "write": true, 00:17:04.961 "unmap": true, 00:17:04.961 "write_zeroes": true, 00:17:04.961 "flush": false, 00:17:04.961 "reset": true, 00:17:04.961 "compare": false, 00:17:04.961 "compare_and_write": false, 00:17:04.961 "abort": false, 00:17:04.961 "nvme_admin": false, 00:17:04.961 "nvme_io": false 00:17:04.961 }, 00:17:04.961 "driver_specific": { 00:17:04.961 "lvol": { 00:17:04.961 "lvol_store_uuid": "e2fadffc-2497-4c33-9aeb-0e80e8903a5e", 00:17:04.961 "base_bdev": "aio_bdev", 
00:17:04.961 "thin_provision": false, 00:17:04.961 "num_allocated_clusters": 38, 00:17:04.961 "snapshot": false, 00:17:04.961 "clone": false, 00:17:04.961 "esnap_clone": false 00:17:04.961 } 00:17:04.961 } 00:17:04.961 } 00:17:04.961 ] 00:17:04.961 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:04.961 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2fadffc-2497-4c33-9aeb-0e80e8903a5e 00:17:04.961 05:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:05.219 05:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:05.219 05:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2fadffc-2497-4c33-9aeb-0e80e8903a5e 00:17:05.219 05:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:05.476 05:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:05.476 05:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b43ece13-3fec-4df6-81fc-ec69c9b4ad8a 00:17:05.734 05:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e2fadffc-2497-4c33-9aeb-0e80e8903a5e 00:17:05.991 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:06.249 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:06.249 00:17:06.249 real 0m17.401s 00:17:06.249 user 0m16.816s 00:17:06.249 sys 0m1.912s 00:17:06.249 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:06.249 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:06.249 ************************************ 00:17:06.249 END TEST lvs_grow_clean 00:17:06.249 ************************************ 00:17:06.249 05:31:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:06.249 05:31:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:06.249 05:31:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:06.249 05:31:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:06.249 ************************************ 00:17:06.249 START TEST lvs_grow_dirty 00:17:06.249 ************************************ 00:17:06.249 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:06.249 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:06.249 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:06.249 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:17:06.249 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:06.249 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:06.249 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:06.249 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:06.249 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:06.249 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:06.507 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:06.507 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:06.765 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=63c0a8bc-007a-43a2-ac3e-4105007922e8 00:17:06.765 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c0a8bc-007a-43a2-ac3e-4105007922e8 00:17:06.765 05:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:07.023 05:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:07.023 05:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:07.023 05:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 63c0a8bc-007a-43a2-ac3e-4105007922e8 lvol 150 00:17:07.282 05:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=adeeb49b-7ac1-468f-82c7-fa6da2268445 00:17:07.282 05:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:07.282 05:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:07.573 [2024-07-14 05:31:14.592101] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:07.573 [2024-07-14 05:31:14.592203] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:07.573 true 00:17:07.573 05:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c0a8bc-007a-43a2-ac3e-4105007922e8 00:17:07.573 05:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:17:07.831 05:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:07.831 05:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:08.088 05:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 adeeb49b-7ac1-468f-82c7-fa6da2268445 00:17:08.345 05:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:08.603 [2024-07-14 05:31:15.671370] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.603 05:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:08.860 05:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3217585 00:17:08.860 05:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:08.860 05:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:08.860 05:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3217585 /var/tmp/bdevperf.sock 00:17:08.860 05:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3217585 ']' 00:17:08.860 05:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.860 05:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:08.860 05:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:08.860 05:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:08.860 05:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:09.117 [2024-07-14 05:31:15.980726] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:17:09.117 [2024-07-14 05:31:15.980795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3217585 ] 00:17:09.117 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.117 [2024-07-14 05:31:16.043566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.117 [2024-07-14 05:31:16.133671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.374 05:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:09.375 05:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:09.375 05:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:09.633 Nvme0n1 00:17:09.633 05:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:09.891 [ 00:17:09.891 { 00:17:09.891 "name": "Nvme0n1", 00:17:09.891 "aliases": [ 00:17:09.891 "adeeb49b-7ac1-468f-82c7-fa6da2268445" 00:17:09.891 ], 00:17:09.891 "product_name": "NVMe disk", 00:17:09.891 "block_size": 4096, 00:17:09.891 "num_blocks": 38912, 00:17:09.891 "uuid": "adeeb49b-7ac1-468f-82c7-fa6da2268445", 00:17:09.891 "assigned_rate_limits": { 00:17:09.891 "rw_ios_per_sec": 0, 00:17:09.891 "rw_mbytes_per_sec": 0, 00:17:09.891 "r_mbytes_per_sec": 0, 00:17:09.891 "w_mbytes_per_sec": 0 00:17:09.891 }, 00:17:09.891 "claimed": false, 00:17:09.891 "zoned": false, 00:17:09.891 "supported_io_types": { 00:17:09.891 "read": true, 00:17:09.891 "write": true, 00:17:09.891 "unmap": true, 00:17:09.891 "write_zeroes": true, 00:17:09.891 "flush": true, 00:17:09.891 "reset": true, 00:17:09.891 "compare": true, 00:17:09.891 "compare_and_write": true, 00:17:09.891 "abort": true, 00:17:09.891 "nvme_admin": true, 00:17:09.891 "nvme_io": true 00:17:09.891 }, 00:17:09.891 "memory_domains": [ 00:17:09.891 { 00:17:09.891 "dma_device_id": "system", 00:17:09.891 "dma_device_type": 1 00:17:09.891 } 00:17:09.891 ], 00:17:09.891 "driver_specific": { 00:17:09.891 "nvme": [ 00:17:09.891 { 00:17:09.891 "trid": { 00:17:09.891 "trtype": "TCP", 00:17:09.891 "adrfam": "IPv4", 00:17:09.891 "traddr": "10.0.0.2", 00:17:09.891 "trsvcid": "4420", 00:17:09.891 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:09.891 }, 00:17:09.891 "ctrlr_data": { 00:17:09.891 "cntlid": 1, 00:17:09.891 "vendor_id": "0x8086", 00:17:09.891 "model_number": "SPDK bdev Controller", 00:17:09.891 "serial_number": "SPDK0", 00:17:09.891 "firmware_revision": "24.05.1", 00:17:09.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:09.891 "oacs": { 00:17:09.891 "security": 0, 00:17:09.891 "format": 0, 00:17:09.891 "firmware": 0, 00:17:09.891 "ns_manage": 0 00:17:09.891 }, 00:17:09.891 "multi_ctrlr": true, 00:17:09.891 "ana_reporting": false 00:17:09.891 }, 00:17:09.891 "vs": { 00:17:09.891 "nvme_version": "1.3" 00:17:09.891 }, 00:17:09.891 "ns_data": { 00:17:09.891 "id": 1, 00:17:09.891 "can_share": true 00:17:09.891 } 00:17:09.891 } 00:17:09.891 ], 00:17:09.891 "mp_policy": "active_passive" 00:17:09.891 } 00:17:09.891 } 00:17:09.891 ] 00:17:09.891 05:31:16 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3217603 00:17:09.891 05:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:09.891 05:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:09.891 Running I/O for 10 seconds... 00:17:10.827 Latency(us) 00:17:10.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:10.827 Nvme0n1 : 1.00 14040.00 54.84 0.00 0.00 0.00 0.00 0.00 00:17:10.827 =================================================================================================================== 00:17:10.827 Total : 14040.00 54.84 0.00 0.00 0.00 0.00 0.00 00:17:10.827 00:17:11.762 05:31:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 63c0a8bc-007a-43a2-ac3e-4105007922e8 00:17:12.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.021 Nvme0n1 : 2.00 14148.00 55.27 0.00 0.00 0.00 0.00 0.00 00:17:12.021 =================================================================================================================== 00:17:12.021 Total : 14148.00 55.27 0.00 0.00 0.00 0.00 0.00 00:17:12.021 00:17:12.021 true 00:17:12.021 05:31:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c0a8bc-007a-43a2-ac3e-4105007922e8 00:17:12.021 05:31:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:12.280 05:31:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:12.280 05:31:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:12.280 05:31:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3217603 00:17:12.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.847 Nvme0n1 : 3.00 14231.67 55.59 0.00 0.00 0.00 0.00 0.00 00:17:12.847 =================================================================================================================== 00:17:12.847 Total : 14231.67 55.59 0.00 0.00 0.00 0.00 0.00 00:17:12.847 00:17:14.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.221 Nvme0n1 : 4.00 14289.75 55.82 0.00 0.00 0.00 0.00 0.00 00:17:14.221 =================================================================================================================== 00:17:14.221 Total : 14289.75 55.82 0.00 0.00 0.00 0.00 0.00 00:17:14.221 00:17:15.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.156 Nvme0n1 : 5.00 14337.60 56.01 0.00 0.00 0.00 0.00 0.00 00:17:15.156 =================================================================================================================== 00:17:15.156 Total : 14337.60 56.01 0.00 0.00 0.00 0.00 0.00 00:17:15.156 00:17:16.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.091 Nvme0n1 : 6.00 14382.83 56.18 0.00 0.00 0.00 0.00 0.00 00:17:16.091 
=================================================================================================================== 00:17:16.091 Total : 14382.83 56.18 0.00 0.00 0.00 0.00 0.00 00:17:16.091 00:17:17.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.023 Nvme0n1 : 7.00 14419.29 56.33 0.00 0.00 0.00 0.00 0.00 00:17:17.023 =================================================================================================================== 00:17:17.023 Total : 14419.29 56.33 0.00 0.00 0.00 0.00 0.00 00:17:17.023 00:17:17.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.958 Nvme0n1 : 8.00 14520.88 56.72 0.00 0.00 0.00 0.00 0.00 00:17:17.958 =================================================================================================================== 00:17:17.958 Total : 14520.88 56.72 0.00 0.00 0.00 0.00 0.00 00:17:17.958 00:17:18.893 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.893 Nvme0n1 : 9.00 14550.22 56.84 0.00 0.00 0.00 0.00 0.00 00:17:18.893 =================================================================================================================== 00:17:18.893 Total : 14550.22 56.84 0.00 0.00 0.00 0.00 0.00 00:17:18.893 00:17:19.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.827 Nvme0n1 : 10.00 14618.40 57.10 0.00 0.00 0.00 0.00 0.00 00:17:19.827 =================================================================================================================== 00:17:19.827 Total : 14618.40 57.10 0.00 0.00 0.00 0.00 0.00 00:17:19.827 00:17:19.827 00:17:19.827 Latency(us) 00:17:19.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.827 Nvme0n1 : 10.01 14622.15 57.12 0.00 0.00 8748.18 2269.49 13689.74 00:17:19.827 =================================================================================================================== 00:17:19.827 Total : 14622.15 57.12 0.00 0.00 8748.18 2269.49 13689.74 00:17:19.827 0 00:17:20.085 05:31:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3217585 00:17:20.085 05:31:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3217585 ']' 00:17:20.085 05:31:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3217585 00:17:20.085 05:31:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:20.085 05:31:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:20.085 05:31:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3217585 00:17:20.085 05:31:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:20.085 05:31:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:20.085 05:31:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3217585' 00:17:20.085 killing process with pid 3217585 00:17:20.085 05:31:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3217585 00:17:20.085 Received shutdown signal, test time was about 10.000000 seconds 00:17:20.085 00:17:20.085 Latency(us) 00:17:20.085 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:20.085 =================================================================================================================== 00:17:20.085 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:20.085 05:31:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3217585 00:17:20.343 05:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:20.600 05:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:20.858 05:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c0a8bc-007a-43a2-ac3e-4105007922e8 00:17:20.858 05:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:21.117 05:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:21.117 05:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:21.117 05:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3214989 00:17:21.117 05:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3214989 00:17:21.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3214989 Killed "${NVMF_APP[@]}" "$@" 00:17:21.117 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:21.117 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:21.117 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:21.117 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:21.117 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:21.117 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3218924 00:17:21.117 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3218924 00:17:21.117 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:21.117 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3218924 ']' 00:17:21.117 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.117 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:21.117 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
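The dirty variant of the test diverges from the clean run at exactly this point: rather than deleting the lvol and lvstore cleanly, the harness SIGKILLs the target while the grown lvstore metadata lives only in the blobstore on the AIO file, then starts a fresh nvmf_tgt (pid 3218924 above) so that re-attaching the AIO file forces blobstore recovery (the "Performing recovery on blobstore" notices a little further down). A minimal sketch of that crash-and-recover step, assuming it is run from the SPDK tree with the stock rpc.py; the rpc_get_methods polling loop merely stands in for the harness's waitforlisten helper and is not its exact code:

  # Kill the target hard so the lvstore is never cleanly unloaded
  kill -9 "$nvmfpid"
  wait "$nvmfpid" || true

  # Start a fresh target in the same network namespace used above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!

  # Poll the RPC socket until the new target answers (stand-in for waitforlisten)
  for _ in $(seq 1 120); do
      ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1 && break
      sleep 0.5
  done

  # Re-registering the AIO file triggers blobstore recovery and re-exposes lvs/lvol
  ./scripts/rpc.py bdev_aio_create ./test/nvmf/target/aio_bdev aio_bdev 4096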
00:17:21.117 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:21.117 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:21.117 [2024-07-14 05:31:28.067270] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:21.117 [2024-07-14 05:31:28.067352] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.117 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.117 [2024-07-14 05:31:28.133608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.117 [2024-07-14 05:31:28.220687] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.117 [2024-07-14 05:31:28.220776] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.117 [2024-07-14 05:31:28.220804] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.117 [2024-07-14 05:31:28.220830] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.117 [2024-07-14 05:31:28.220850] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:21.117 [2024-07-14 05:31:28.220888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.377 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:21.377 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:21.377 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:21.377 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:21.377 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:21.377 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.377 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:21.636 [2024-07-14 05:31:28.632635] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:21.636 [2024-07-14 05:31:28.632781] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:21.636 [2024-07-14 05:31:28.632841] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:21.636 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:21.636 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev adeeb49b-7ac1-468f-82c7-fa6da2268445 00:17:21.636 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=adeeb49b-7ac1-468f-82c7-fa6da2268445 00:17:21.636 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:21.636 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:21.636 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:21.636 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:21.636 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:21.929 05:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b adeeb49b-7ac1-468f-82c7-fa6da2268445 -t 2000 00:17:22.187 [ 00:17:22.187 { 00:17:22.187 "name": "adeeb49b-7ac1-468f-82c7-fa6da2268445", 00:17:22.187 "aliases": [ 00:17:22.187 "lvs/lvol" 00:17:22.187 ], 00:17:22.187 "product_name": "Logical Volume", 00:17:22.187 "block_size": 4096, 00:17:22.187 "num_blocks": 38912, 00:17:22.187 "uuid": "adeeb49b-7ac1-468f-82c7-fa6da2268445", 00:17:22.187 "assigned_rate_limits": { 00:17:22.187 "rw_ios_per_sec": 0, 00:17:22.187 "rw_mbytes_per_sec": 0, 00:17:22.187 "r_mbytes_per_sec": 0, 00:17:22.187 "w_mbytes_per_sec": 0 00:17:22.187 }, 00:17:22.187 "claimed": false, 00:17:22.187 "zoned": false, 00:17:22.187 "supported_io_types": { 00:17:22.187 "read": true, 00:17:22.187 "write": true, 00:17:22.187 "unmap": true, 00:17:22.187 "write_zeroes": true, 00:17:22.187 "flush": false, 00:17:22.187 "reset": true, 00:17:22.187 "compare": false, 00:17:22.187 "compare_and_write": false, 00:17:22.187 "abort": false, 00:17:22.187 "nvme_admin": false, 00:17:22.187 "nvme_io": false 00:17:22.187 }, 00:17:22.187 "driver_specific": { 00:17:22.187 "lvol": { 00:17:22.187 "lvol_store_uuid": "63c0a8bc-007a-43a2-ac3e-4105007922e8", 00:17:22.187 "base_bdev": "aio_bdev", 00:17:22.187 "thin_provision": false, 00:17:22.187 "num_allocated_clusters": 38, 00:17:22.187 "snapshot": false, 00:17:22.187 "clone": false, 00:17:22.187 "esnap_clone": false 00:17:22.187 } 00:17:22.187 } 00:17:22.187 } 00:17:22.187 ] 00:17:22.187 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:22.187 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c0a8bc-007a-43a2-ac3e-4105007922e8 00:17:22.187 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:22.445 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:22.445 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c0a8bc-007a-43a2-ac3e-4105007922e8 00:17:22.445 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:22.703 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:22.703 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:22.962 [2024-07-14 05:31:29.941639] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:22.962 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
63c0a8bc-007a-43a2-ac3e-4105007922e8 00:17:22.962 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:22.962 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c0a8bc-007a-43a2-ac3e-4105007922e8 00:17:22.962 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.962 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:22.962 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.962 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:22.962 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.962 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:22.962 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.962 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:22.962 05:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c0a8bc-007a-43a2-ac3e-4105007922e8 00:17:23.220 request: 00:17:23.220 { 00:17:23.220 "uuid": "63c0a8bc-007a-43a2-ac3e-4105007922e8", 00:17:23.220 "method": "bdev_lvol_get_lvstores", 00:17:23.220 "req_id": 1 00:17:23.220 } 00:17:23.220 Got JSON-RPC error response 00:17:23.220 response: 00:17:23.220 { 00:17:23.220 "code": -19, 00:17:23.220 "message": "No such device" 00:17:23.220 } 00:17:23.220 05:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:23.220 05:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:23.220 05:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:23.220 05:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:23.220 05:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:23.478 aio_bdev 00:17:23.478 05:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev adeeb49b-7ac1-468f-82c7-fa6da2268445 00:17:23.478 05:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=adeeb49b-7ac1-468f-82c7-fa6da2268445 00:17:23.478 05:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:23.478 05:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:23.478 05:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
00:17:23.478 05:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:23.478 05:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:23.736 05:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b adeeb49b-7ac1-468f-82c7-fa6da2268445 -t 2000 00:17:23.994 [ 00:17:23.994 { 00:17:23.994 "name": "adeeb49b-7ac1-468f-82c7-fa6da2268445", 00:17:23.994 "aliases": [ 00:17:23.994 "lvs/lvol" 00:17:23.994 ], 00:17:23.994 "product_name": "Logical Volume", 00:17:23.994 "block_size": 4096, 00:17:23.994 "num_blocks": 38912, 00:17:23.994 "uuid": "adeeb49b-7ac1-468f-82c7-fa6da2268445", 00:17:23.994 "assigned_rate_limits": { 00:17:23.994 "rw_ios_per_sec": 0, 00:17:23.994 "rw_mbytes_per_sec": 0, 00:17:23.994 "r_mbytes_per_sec": 0, 00:17:23.994 "w_mbytes_per_sec": 0 00:17:23.994 }, 00:17:23.994 "claimed": false, 00:17:23.994 "zoned": false, 00:17:23.994 "supported_io_types": { 00:17:23.994 "read": true, 00:17:23.994 "write": true, 00:17:23.994 "unmap": true, 00:17:23.994 "write_zeroes": true, 00:17:23.994 "flush": false, 00:17:23.994 "reset": true, 00:17:23.994 "compare": false, 00:17:23.994 "compare_and_write": false, 00:17:23.994 "abort": false, 00:17:23.994 "nvme_admin": false, 00:17:23.994 "nvme_io": false 00:17:23.994 }, 00:17:23.994 "driver_specific": { 00:17:23.994 "lvol": { 00:17:23.994 "lvol_store_uuid": "63c0a8bc-007a-43a2-ac3e-4105007922e8", 00:17:23.994 "base_bdev": "aio_bdev", 00:17:23.994 "thin_provision": false, 00:17:23.994 "num_allocated_clusters": 38, 00:17:23.994 "snapshot": false, 00:17:23.994 "clone": false, 00:17:23.994 "esnap_clone": false 00:17:23.994 } 00:17:23.994 } 00:17:23.994 } 00:17:23.994 ] 00:17:23.994 05:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:23.994 05:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c0a8bc-007a-43a2-ac3e-4105007922e8 00:17:23.994 05:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:24.252 05:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:24.252 05:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63c0a8bc-007a-43a2-ac3e-4105007922e8 00:17:24.252 05:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:24.510 05:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:24.511 05:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete adeeb49b-7ac1-468f-82c7-fa6da2268445 00:17:24.769 05:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 63c0a8bc-007a-43a2-ac3e-4105007922e8 00:17:25.028 05:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:25.286 00:17:25.286 real 0m18.924s 00:17:25.286 user 0m47.701s 00:17:25.286 sys 0m4.842s 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:25.286 ************************************ 00:17:25.286 END TEST lvs_grow_dirty 00:17:25.286 ************************************ 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:25.286 nvmf_trace.0 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:25.286 rmmod nvme_tcp 00:17:25.286 rmmod nvme_fabrics 00:17:25.286 rmmod nvme_keyring 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3218924 ']' 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3218924 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3218924 ']' 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3218924 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3218924 00:17:25.286 05:31:32 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3218924' 00:17:25.286 killing process with pid 3218924 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3218924 00:17:25.286 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3218924 00:17:25.544 05:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:25.544 05:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:25.544 05:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:25.544 05:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:25.544 05:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:25.544 05:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.544 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.544 05:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.076 05:31:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:28.076 00:17:28.076 real 0m41.603s 00:17:28.076 user 1m10.164s 00:17:28.076 sys 0m8.639s 00:17:28.076 05:31:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:28.076 05:31:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:28.076 ************************************ 00:17:28.076 END TEST nvmf_lvs_grow 00:17:28.076 ************************************ 00:17:28.076 05:31:34 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:28.076 05:31:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:28.076 05:31:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:28.076 05:31:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:28.076 ************************************ 00:17:28.076 START TEST nvmf_bdev_io_wait 00:17:28.076 ************************************ 00:17:28.076 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:28.076 * Looking for test storage... 
00:17:28.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.076 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.076 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:28.076 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:28.077 05:31:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:29.973 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:29.973 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.973 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:29.974 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:29.974 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:29.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:29.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:17:29.974 00:17:29.974 --- 10.0.0.2 ping statistics --- 00:17:29.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.974 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:29.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:29.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:17:29.974 00:17:29.974 --- 10.0.0.1 ping statistics --- 00:17:29.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.974 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3221443 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3221443 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3221443 ']' 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:29.974 05:31:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:29.974 [2024-07-14 05:31:37.022369] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:17:29.974 [2024-07-14 05:31:37.022445] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.974 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.230 [2024-07-14 05:31:37.089123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:30.230 [2024-07-14 05:31:37.175549] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.230 [2024-07-14 05:31:37.175602] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.230 [2024-07-14 05:31:37.175615] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.230 [2024-07-14 05:31:37.175626] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.230 [2024-07-14 05:31:37.175636] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.230 [2024-07-14 05:31:37.175780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.230 [2024-07-14 05:31:37.175844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.230 [2024-07-14 05:31:37.175913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:30.230 [2024-07-14 05:31:37.175918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.230 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:30.230 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:30.230 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:30.230 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.230 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:30.230 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.230 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:30.230 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.230 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:30.230 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.230 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:30.230 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.230 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:30.230 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.230 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:30.230 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.230 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:30.230 [2024-07-14 05:31:37.328885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.488 05:31:37 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:30.488 Malloc0 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:30.488 [2024-07-14 05:31:37.389110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3221489 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3221491 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:30.488 { 00:17:30.488 "params": { 00:17:30.488 "name": "Nvme$subsystem", 00:17:30.488 "trtype": "$TEST_TRANSPORT", 00:17:30.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:30.488 "adrfam": "ipv4", 00:17:30.488 "trsvcid": "$NVMF_PORT", 00:17:30.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:30.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:30.488 "hdgst": ${hdgst:-false}, 00:17:30.488 "ddgst": ${ddgst:-false} 00:17:30.488 }, 00:17:30.488 "method": "bdev_nvme_attach_controller" 00:17:30.488 } 00:17:30.488 EOF 00:17:30.488 )") 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3221493 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:30.488 { 00:17:30.488 "params": { 00:17:30.488 "name": "Nvme$subsystem", 00:17:30.488 "trtype": "$TEST_TRANSPORT", 00:17:30.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:30.488 "adrfam": "ipv4", 00:17:30.488 "trsvcid": "$NVMF_PORT", 00:17:30.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:30.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:30.488 "hdgst": ${hdgst:-false}, 00:17:30.488 "ddgst": ${ddgst:-false} 00:17:30.488 }, 00:17:30.488 "method": "bdev_nvme_attach_controller" 00:17:30.488 } 00:17:30.488 EOF 00:17:30.488 )") 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3221496 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:30.488 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:30.489 { 00:17:30.489 "params": { 00:17:30.489 "name": "Nvme$subsystem", 00:17:30.489 "trtype": "$TEST_TRANSPORT", 00:17:30.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:30.489 "adrfam": "ipv4", 00:17:30.489 "trsvcid": "$NVMF_PORT", 00:17:30.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:30.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:30.489 "hdgst": ${hdgst:-false}, 00:17:30.489 "ddgst": ${ddgst:-false} 00:17:30.489 }, 00:17:30.489 "method": "bdev_nvme_attach_controller" 00:17:30.489 } 00:17:30.489 EOF 00:17:30.489 )") 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:30.489 05:31:37 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:30.489 { 00:17:30.489 "params": { 00:17:30.489 "name": "Nvme$subsystem", 00:17:30.489 "trtype": "$TEST_TRANSPORT", 00:17:30.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:30.489 "adrfam": "ipv4", 00:17:30.489 "trsvcid": "$NVMF_PORT", 00:17:30.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:30.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:30.489 "hdgst": ${hdgst:-false}, 00:17:30.489 "ddgst": ${ddgst:-false} 00:17:30.489 }, 00:17:30.489 "method": "bdev_nvme_attach_controller" 00:17:30.489 } 00:17:30.489 EOF 00:17:30.489 )") 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3221489 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:30.489 "params": { 00:17:30.489 "name": "Nvme1", 00:17:30.489 "trtype": "tcp", 00:17:30.489 "traddr": "10.0.0.2", 00:17:30.489 "adrfam": "ipv4", 00:17:30.489 "trsvcid": "4420", 00:17:30.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:30.489 "hdgst": false, 00:17:30.489 "ddgst": false 00:17:30.489 }, 00:17:30.489 "method": "bdev_nvme_attach_controller" 00:17:30.489 }' 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:30.489 "params": { 00:17:30.489 "name": "Nvme1", 00:17:30.489 "trtype": "tcp", 00:17:30.489 "traddr": "10.0.0.2", 00:17:30.489 "adrfam": "ipv4", 00:17:30.489 "trsvcid": "4420", 00:17:30.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:30.489 "hdgst": false, 00:17:30.489 "ddgst": false 00:17:30.489 }, 00:17:30.489 "method": "bdev_nvme_attach_controller" 00:17:30.489 }' 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:30.489 "params": { 00:17:30.489 "name": "Nvme1", 00:17:30.489 "trtype": "tcp", 00:17:30.489 "traddr": "10.0.0.2", 00:17:30.489 "adrfam": "ipv4", 00:17:30.489 "trsvcid": "4420", 00:17:30.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:30.489 "hdgst": false, 00:17:30.489 "ddgst": false 00:17:30.489 }, 00:17:30.489 "method": "bdev_nvme_attach_controller" 00:17:30.489 }' 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:30.489 05:31:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:30.489 "params": { 00:17:30.489 "name": "Nvme1", 00:17:30.489 "trtype": "tcp", 00:17:30.489 "traddr": "10.0.0.2", 00:17:30.489 "adrfam": "ipv4", 00:17:30.489 "trsvcid": "4420", 00:17:30.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:30.489 "hdgst": false, 00:17:30.489 "ddgst": false 00:17:30.489 }, 00:17:30.489 "method": "bdev_nvme_attach_controller" 00:17:30.489 }' 00:17:30.489 [2024-07-14 05:31:37.434260] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:30.489 [2024-07-14 05:31:37.434344] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:30.489 [2024-07-14 05:31:37.435366] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:30.489 [2024-07-14 05:31:37.435366] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:30.489 [2024-07-14 05:31:37.435440] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:30.489 [2024-07-14 05:31:37.435440] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:30.489 [2024-07-14 05:31:37.435515] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:17:30.489 [2024-07-14 05:31:37.435574] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:30.489 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.489 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.489 [2024-07-14 05:31:37.585826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.747 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.747 [2024-07-14 05:31:37.655978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:30.747 [2024-07-14 05:31:37.656594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.747 [2024-07-14 05:31:37.723618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:30.747 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.747 [2024-07-14 05:31:37.755277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.747 [2024-07-14 05:31:37.829801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:31.012 [2024-07-14 05:31:37.855713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.012 [2024-07-14 05:31:37.935348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:31.012 Running I/O for 1 seconds... 00:17:31.012 Running I/O for 1 seconds... 00:17:31.273 Running I/O for 1 seconds... 00:17:31.273 Running I/O for 1 seconds... 00:17:32.208 00:17:32.208 Latency(us) 00:17:32.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.208 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:32.208 Nvme1n1 : 1.01 7097.21 27.72 0.00 0.00 17915.13 8786.68 24563.86 00:17:32.208 =================================================================================================================== 00:17:32.208 Total : 7097.21 27.72 0.00 0.00 17915.13 8786.68 24563.86 00:17:32.208 00:17:32.208 Latency(us) 00:17:32.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.208 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:32.208 Nvme1n1 : 1.01 9465.26 36.97 0.00 0.00 13464.62 7670.14 25826.04 00:17:32.208 =================================================================================================================== 00:17:32.208 Total : 9465.26 36.97 0.00 0.00 13464.62 7670.14 25826.04 00:17:32.208 00:17:32.208 Latency(us) 00:17:32.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.208 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:32.208 Nvme1n1 : 1.01 8934.44 34.90 0.00 0.00 14258.84 8932.31 26796.94 00:17:32.208 =================================================================================================================== 00:17:32.208 Total : 8934.44 34.90 0.00 0.00 14258.84 8932.31 26796.94 00:17:32.208 00:17:32.208 Latency(us) 00:17:32.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.208 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:32.208 Nvme1n1 : 1.00 17530.32 68.48 0.00 0.00 7235.56 273.07 57477.50 00:17:32.208 =================================================================================================================== 00:17:32.208 Total : 17530.32 68.48 0.00 0.00 7235.56 273.07 57477.50 00:17:32.466 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 3221491 00:17:32.466 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3221493 00:17:32.466 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3221496 00:17:32.466 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.466 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.466 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:32.466 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.466 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:32.466 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:32.466 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:32.466 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:32.466 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:32.466 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:32.466 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:32.466 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:32.466 rmmod nvme_tcp 00:17:32.725 rmmod nvme_fabrics 00:17:32.725 rmmod nvme_keyring 00:17:32.725 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:32.725 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:32.725 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:32.725 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3221443 ']' 00:17:32.725 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3221443 00:17:32.725 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3221443 ']' 00:17:32.725 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3221443 00:17:32.725 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:32.725 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:32.725 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3221443 00:17:32.725 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:32.725 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:32.725 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3221443' 00:17:32.725 killing process with pid 3221443 00:17:32.725 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3221443 00:17:32.725 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3221443 00:17:32.983 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:32.983 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:32.983 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:32.983 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:32.983 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:17:32.983 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.983 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.983 05:31:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.884 05:31:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:34.884 00:17:34.884 real 0m7.193s 00:17:34.884 user 0m15.179s 00:17:34.884 sys 0m3.724s 00:17:34.884 05:31:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:34.884 05:31:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:34.884 ************************************ 00:17:34.884 END TEST nvmf_bdev_io_wait 00:17:34.884 ************************************ 00:17:34.884 05:31:41 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:34.884 05:31:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:34.884 05:31:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:34.884 05:31:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:34.884 ************************************ 00:17:34.884 START TEST nvmf_queue_depth 00:17:34.884 ************************************ 00:17:34.884 05:31:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:34.884 * Looking for test storage... 00:17:35.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.143 05:31:41 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.143 05:31:41 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.143 05:31:42 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.143 05:31:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.143 05:31:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.143 05:31:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.144 05:31:42 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:35.144 05:31:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:37.044 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:37.044 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.044 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:37.044 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:37.045 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.045 05:31:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:37.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:37.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:17:37.045 00:17:37.045 --- 10.0.0.2 ping statistics --- 00:17:37.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.045 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:37.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:17:37.045 00:17:37.045 --- 10.0.0.1 ping statistics --- 00:17:37.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.045 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3223690 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
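For reference, the nvmf_tcp_init sequence traced above reduces to a handful of ip/iptables commands: the first port (cvl_0_0) is moved into a private network namespace to act as the target, the second port (cvl_0_1) stays in the root namespace as the initiator, and a ping in each direction confirms the link. A minimal sketch, using the interface names and 10.0.0.0/24 addresses from this particular run:

# flush any stale addresses, then split the two ports across namespaces
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator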
00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3223690 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3223690 ']' 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:37.045 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:37.332 [2024-07-14 05:31:44.186057] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:37.332 [2024-07-14 05:31:44.186136] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.332 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.332 [2024-07-14 05:31:44.257015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.332 [2024-07-14 05:31:44.350197] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.332 [2024-07-14 05:31:44.350262] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.332 [2024-07-14 05:31:44.350289] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.332 [2024-07-14 05:31:44.350303] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.332 [2024-07-14 05:31:44.350315] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:37.332 [2024-07-14 05:31:44.350344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:37.591 [2024-07-14 05:31:44.492483] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:37.591 Malloc0 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:37.591 [2024-07-14 05:31:44.550496] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3223839 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:37.591 05:31:44 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3223839 /var/tmp/bdevperf.sock 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3223839 ']' 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:37.591 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:37.591 [2024-07-14 05:31:44.595810] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:37.591 [2024-07-14 05:31:44.595897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3223839 ] 00:17:37.591 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.591 [2024-07-14 05:31:44.658680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.850 [2024-07-14 05:31:44.751228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.850 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:37.850 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:37.850 05:31:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:37.850 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.850 05:31:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.109 NVMe0n1 00:17:38.109 05:31:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.109 05:31:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:38.109 Running I/O for 10 seconds... 
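Before the ten-second run starts, the whole queue-depth fixture amounts to one RPC sequence against the target plus a bdevperf attach on the initiator side. A condensed sketch of the commands traced above (scripts/rpc.py stands in for the rpc_cmd wrapper, and the long workspace paths are shortened):

# Target side: TCP transport, a 64 MiB / 512-byte-block malloc bdev, and a
# subsystem exporting it on 10.0.0.2:4420.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf idles with -z until the controller is attached over
# its own RPC socket, then perform_tests launches the qd=1024, 4 KiB verify job.
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests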
00:17:50.322 00:17:50.322 Latency(us) 00:17:50.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.322 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:50.322 Verification LBA range: start 0x0 length 0x4000 00:17:50.322 NVMe0n1 : 10.07 9073.43 35.44 0.00 0.00 112348.27 11359.57 69516.71 00:17:50.322 =================================================================================================================== 00:17:50.322 Total : 9073.43 35.44 0.00 0.00 112348.27 11359.57 69516.71 00:17:50.322 0 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3223839 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3223839 ']' 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3223839 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3223839 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3223839' 00:17:50.322 killing process with pid 3223839 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3223839 00:17:50.322 Received shutdown signal, test time was about 10.000000 seconds 00:17:50.322 00:17:50.322 Latency(us) 00:17:50.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.322 =================================================================================================================== 00:17:50.322 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3223839 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:50.322 rmmod nvme_tcp 00:17:50.322 rmmod nvme_fabrics 00:17:50.322 rmmod nvme_keyring 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3223690 ']' 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3223690 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 
3223690 ']' 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3223690 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3223690 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:50.322 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3223690' 00:17:50.322 killing process with pid 3223690 00:17:50.323 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3223690 00:17:50.323 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3223690 00:17:50.323 05:31:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:50.323 05:31:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:50.323 05:31:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:50.323 05:31:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.323 05:31:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:50.323 05:31:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.323 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.323 05:31:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.889 05:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:50.889 00:17:50.889 real 0m15.988s 00:17:50.889 user 0m22.607s 00:17:50.889 sys 0m2.974s 00:17:50.889 05:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:50.889 05:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.889 ************************************ 00:17:50.889 END TEST nvmf_queue_depth 00:17:50.889 ************************************ 00:17:50.889 05:31:57 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:50.889 05:31:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:50.889 05:31:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:50.889 05:31:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:50.889 ************************************ 00:17:50.889 START TEST nvmf_target_multipath 00:17:50.889 ************************************ 00:17:50.889 05:31:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:51.148 * Looking for test storage... 
00:17:51.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:51.148 05:31:58 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.149 
05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:51.149 05:31:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:53.050 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:53.051 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:53.051 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.051 
05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:53.051 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:53.051 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:53.051 05:31:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:53.051 
05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:53.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:17:53.051 00:17:53.051 --- 10.0.0.2 ping statistics --- 00:17:53.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.051 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:53.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:53.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:17:53.051 00:17:53.051 --- 10.0.0.1 ping statistics --- 00:17:53.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.051 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:53.051 only one NIC for nvmf test 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.051 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.051 rmmod nvme_tcp 00:17:53.310 rmmod nvme_fabrics 00:17:53.310 rmmod nvme_keyring 00:17:53.310 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.310 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:53.310 05:32:00 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:53.310 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:53.310 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:53.310 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:53.310 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:53.310 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.310 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.310 05:32:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.310 05:32:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.310 05:32:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:55.209 00:17:55.209 real 0m4.285s 00:17:55.209 user 0m0.786s 00:17:55.209 sys 0m1.479s 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:55.209 05:32:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:55.209 
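The multipath test never reaches its I/O phase on this rig: common.sh leaves NVMF_SECOND_TARGET_IP empty (there is no second usable path on this setup), so the guard at multipath.sh line 45 prints "only one NIC for nvmf test", cleans up and exits 0, which is why the test finishes in about four seconds. The guard traced above is roughly as follows (assuming the empty string being tested is $NVMF_SECOND_TARGET_IP):

if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
        # single NIC / single port: multipath cannot be exercised, skip cleanly
        echo 'only one NIC for nvmf test'
        nvmftestfini
        exit 0
fi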
************************************ 00:17:55.209 END TEST nvmf_target_multipath 00:17:55.209 ************************************ 00:17:55.209 05:32:02 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:55.209 05:32:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:55.209 05:32:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:55.209 05:32:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.209 ************************************ 00:17:55.209 START TEST nvmf_zcopy 00:17:55.209 ************************************ 00:17:55.209 05:32:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:55.467 * Looking for test storage... 00:17:55.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.467 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 
00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:55.468 05:32:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.364 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 
00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:57.623 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:57.623 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:57.623 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:57.623 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:57.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:17:57.623 00:17:57.623 --- 10.0.0.2 ping statistics --- 00:17:57.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.623 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:57.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:17:57.623 00:17:57.623 --- 10.0.0.1 ping statistics --- 00:17:57.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.623 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3228892 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3228892 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 3228892 ']' 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:57.623 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:57.623 [2024-07-14 05:32:04.677763] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:57.623 [2024-07-14 05:32:04.677847] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.623 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.881 [2024-07-14 05:32:04.746299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.881 [2024-07-14 05:32:04.831826] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.881 [2024-07-14 05:32:04.831886] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:57.881 [2024-07-14 05:32:04.831910] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.881 [2024-07-14 05:32:04.831921] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.881 [2024-07-14 05:32:04.831930] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.881 [2024-07-14 05:32:04.831955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:57.881 [2024-07-14 05:32:04.961598] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:57.881 [2024-07-14 05:32:04.977795] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:57.881 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.882 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.140 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.140 05:32:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:58.140 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.140 05:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.140 malloc0 00:17:58.140 05:32:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.140 
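The target (nvmf_tgt, core mask 0x2) runs inside the cvl_0_0_ns_spdk namespace and is configured through rpc_cmd, which effectively drives scripts/rpc.py over /var/tmp/spdk.sock. As a rough standalone equivalent of the sequence traced here (flags copied verbatim from the trace; the namespace attach follows just below), a sketch only:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path assumed from the workspace layout
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                      # TCP transport with zero-copy enabled
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0                             # 32 MB RAM-backed bdev, 4096-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1     # the attach traced just below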
05:32:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:58.140 05:32:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.140 05:32:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.140 05:32:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.140 05:32:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:58.140 05:32:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:58.140 05:32:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:58.140 05:32:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:58.140 05:32:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:58.140 05:32:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:58.140 { 00:17:58.140 "params": { 00:17:58.140 "name": "Nvme$subsystem", 00:17:58.140 "trtype": "$TEST_TRANSPORT", 00:17:58.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:58.140 "adrfam": "ipv4", 00:17:58.140 "trsvcid": "$NVMF_PORT", 00:17:58.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:58.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:58.140 "hdgst": ${hdgst:-false}, 00:17:58.140 "ddgst": ${ddgst:-false} 00:17:58.140 }, 00:17:58.140 "method": "bdev_nvme_attach_controller" 00:17:58.140 } 00:17:58.140 EOF 00:17:58.140 )") 00:17:58.140 05:32:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:58.140 05:32:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:58.140 05:32:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:58.140 05:32:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:58.140 "params": { 00:17:58.140 "name": "Nvme1", 00:17:58.140 "trtype": "tcp", 00:17:58.140 "traddr": "10.0.0.2", 00:17:58.140 "adrfam": "ipv4", 00:17:58.140 "trsvcid": "4420", 00:17:58.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.140 "hdgst": false, 00:17:58.140 "ddgst": false 00:17:58.140 }, 00:17:58.140 "method": "bdev_nvme_attach_controller" 00:17:58.140 }' 00:17:58.140 [2024-07-14 05:32:05.050692] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:58.140 [2024-07-14 05:32:05.050783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3229027 ] 00:17:58.140 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.140 [2024-07-14 05:32:05.114017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.140 [2024-07-14 05:32:05.205742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.398 Running I/O for 10 seconds... 
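The initiator side is the bdevperf example app, fed a bdev JSON config on /dev/fd/62 that gen_nvmf_target_json assembles from the bdev_nvme_attach_controller parameters printed above; its 10-second verify run produces the latency table that follows immediately below. A sketch of the same run without process substitution, with the full JSON wrapper around that fragment assumed:

  cat > /tmp/zcopy_initiator.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                  "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } } ] } ] }
  EOF
  ./build/examples/bdevperf --json /tmp/zcopy_initiator.json -t 10 -q 128 -w verify -o 8192   # 10 s, qd 128, 8 KiB verify I/O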
00:18:08.468 00:18:08.468 Latency(us) 00:18:08.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.468 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:08.468 Verification LBA range: start 0x0 length 0x1000 00:18:08.468 Nvme1n1 : 10.02 6022.27 47.05 0.00 0.00 21196.70 3446.71 33010.73 00:18:08.468 =================================================================================================================== 00:18:08.468 Total : 6022.27 47.05 0.00 0.00 21196.70 3446.71 33010.73 00:18:08.725 05:32:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3230218 00:18:08.725 05:32:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:08.725 05:32:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:08.725 05:32:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:08.725 05:32:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:08.725 05:32:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:08.725 05:32:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:08.725 05:32:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:08.725 05:32:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:08.725 { 00:18:08.725 "params": { 00:18:08.725 "name": "Nvme$subsystem", 00:18:08.725 "trtype": "$TEST_TRANSPORT", 00:18:08.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.725 "adrfam": "ipv4", 00:18:08.725 "trsvcid": "$NVMF_PORT", 00:18:08.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.725 "hdgst": ${hdgst:-false}, 00:18:08.725 "ddgst": ${ddgst:-false} 00:18:08.725 }, 00:18:08.725 "method": "bdev_nvme_attach_controller" 00:18:08.725 } 00:18:08.725 EOF 00:18:08.725 )") 00:18:08.725 [2024-07-14 05:32:15.696048] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.725 [2024-07-14 05:32:15.696095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.725 05:32:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:08.725 05:32:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:18:08.725 05:32:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:08.725 05:32:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:08.725 "params": { 00:18:08.725 "name": "Nvme1", 00:18:08.725 "trtype": "tcp", 00:18:08.725 "traddr": "10.0.0.2", 00:18:08.725 "adrfam": "ipv4", 00:18:08.725 "trsvcid": "4420", 00:18:08.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:08.725 "hdgst": false, 00:18:08.725 "ddgst": false 00:18:08.725 }, 00:18:08.725 "method": "bdev_nvme_attach_controller" 00:18:08.725 }' 00:18:08.725 [2024-07-14 05:32:15.703981] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.725 [2024-07-14 05:32:15.704006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.725 [2024-07-14 05:32:15.712006] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.725 [2024-07-14 05:32:15.712030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.725 [2024-07-14 05:32:15.720027] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.725 [2024-07-14 05:32:15.720050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.725 [2024-07-14 05:32:15.728050] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.725 [2024-07-14 05:32:15.728074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.725 [2024-07-14 05:32:15.734268] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:08.725 [2024-07-14 05:32:15.734324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3230218 ] 00:18:08.725 [2024-07-14 05:32:15.736069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.725 [2024-07-14 05:32:15.736093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.725 [2024-07-14 05:32:15.744088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.725 [2024-07-14 05:32:15.744110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.725 [2024-07-14 05:32:15.752108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.725 [2024-07-14 05:32:15.752130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.725 [2024-07-14 05:32:15.760130] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.725 [2024-07-14 05:32:15.760165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.725 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.725 [2024-07-14 05:32:15.768167] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.725 [2024-07-14 05:32:15.768194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.725 [2024-07-14 05:32:15.776195] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.725 [2024-07-14 05:32:15.776220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.725 [2024-07-14 05:32:15.784228] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.725 [2024-07-14 05:32:15.784253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.725 [2024-07-14 05:32:15.792246] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.725 [2024-07-14 05:32:15.792271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.725 [2024-07-14 05:32:15.798828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.725 [2024-07-14 05:32:15.800258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.725 [2024-07-14 05:32:15.800282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.725 [2024-07-14 05:32:15.808323] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.725 [2024-07-14 05:32:15.808364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.725 [2024-07-14 05:32:15.816329] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.726 [2024-07-14 05:32:15.816365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.726 [2024-07-14 05:32:15.824329] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.726 [2024-07-14 05:32:15.824354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.832365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.832395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.840379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.840407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.848400] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.848426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.856449] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.856488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.864452] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.864483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.872462] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.872488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.880481] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.880506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.888504] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.888529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.895626] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 0 00:18:08.983 [2024-07-14 05:32:15.896525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.896550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.904545] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.904569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.912592] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.912636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.920620] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.920660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.928644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.928685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.936666] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.936706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.944691] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.944734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.952709] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.952749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.960739] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.960781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.968725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.968753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.976776] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.976816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.984797] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.984838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:15.992813] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:15.992848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:16.000808] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:16.000833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:16.008829] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:08.983 [2024-07-14 05:32:16.008855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:16.016862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:16.016902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:16.024895] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:16.024938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:16.032911] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:16.032949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:16.040942] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:16.040966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:16.048957] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:16.048979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:16.056973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:16.056995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:16.064982] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:16.065012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:16.073003] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:16.073024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.983 [2024-07-14 05:32:16.081028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.983 [2024-07-14 05:32:16.081051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.240 [2024-07-14 05:32:16.089071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.240 [2024-07-14 05:32:16.089099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.240 [2024-07-14 05:32:16.097079] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.240 [2024-07-14 05:32:16.097103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.240 [2024-07-14 05:32:16.105095] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.240 [2024-07-14 05:32:16.105117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.240 [2024-07-14 05:32:16.113120] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.240 [2024-07-14 05:32:16.113155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.240 [2024-07-14 05:32:16.121159] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.240 [2024-07-14 05:32:16.121184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.240 [2024-07-14 05:32:16.129182] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.240 [2024-07-14 05:32:16.129203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.240 [2024-07-14 05:32:16.137209] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.240 [2024-07-14 05:32:16.137232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.240 [2024-07-14 05:32:16.145239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.240 [2024-07-14 05:32:16.145265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.240 [2024-07-14 05:32:16.153263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.240 [2024-07-14 05:32:16.153288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.240 [2024-07-14 05:32:16.161286] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.240 [2024-07-14 05:32:16.161311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.240 [2024-07-14 05:32:16.169299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.240 [2024-07-14 05:32:16.169324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.240 [2024-07-14 05:32:16.177324] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.240 [2024-07-14 05:32:16.177350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.240 [2024-07-14 05:32:16.185343] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.240 [2024-07-14 05:32:16.185370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.240 [2024-07-14 05:32:16.193363] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.240 [2024-07-14 05:32:16.193388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.240 [2024-07-14 05:32:16.201388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.240 [2024-07-14 05:32:16.201413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.240 [2024-07-14 05:32:16.209409] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.240 [2024-07-14 05:32:16.209433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.240 [2024-07-14 05:32:16.217434] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.241 [2024-07-14 05:32:16.217464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.241 [2024-07-14 05:32:16.225458] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.241 [2024-07-14 05:32:16.225484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.241 [2024-07-14 05:32:16.233921] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.241 [2024-07-14 05:32:16.233950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.241 [2024-07-14 05:32:16.241509] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.241 [2024-07-14 05:32:16.241538] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.241 Running I/O for 5 seconds... 00:18:09.241 [2024-07-14 05:32:16.249528] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.241 [2024-07-14 05:32:16.249554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.241 [2024-07-14 05:32:16.264771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.241 [2024-07-14 05:32:16.264802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.241 [2024-07-14 05:32:16.275832] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.241 [2024-07-14 05:32:16.275861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.241 [2024-07-14 05:32:16.286717] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.241 [2024-07-14 05:32:16.286746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.241 [2024-07-14 05:32:16.297222] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.241 [2024-07-14 05:32:16.297251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.241 [2024-07-14 05:32:16.307958] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.241 [2024-07-14 05:32:16.307995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.241 [2024-07-14 05:32:16.318523] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.241 [2024-07-14 05:32:16.318552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.241 [2024-07-14 05:32:16.329063] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.241 [2024-07-14 05:32:16.329090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.241 [2024-07-14 05:32:16.339840] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.241 [2024-07-14 05:32:16.339875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.350433] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.350461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.360909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.360937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.373662] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.373692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.384903] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.384940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.394223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.394252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.405593] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.405622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.415787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.415815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.426048] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.426076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.438671] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.438699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.447862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.447897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.458597] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.458625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.468958] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.468986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.479215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.479243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.489569] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.489597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.501936] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.501964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.511198] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.511226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.523549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.523577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.532718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.532746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.545271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.545299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.554314] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.554342] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.565283] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.565311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.575436] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.575464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.585652] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.585680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.498 [2024-07-14 05:32:16.596186] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.498 [2024-07-14 05:32:16.596214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.756 [2024-07-14 05:32:16.609402] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.756 [2024-07-14 05:32:16.609431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.756 [2024-07-14 05:32:16.618715] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.756 [2024-07-14 05:32:16.618743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.756 [2024-07-14 05:32:16.629214] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.756 [2024-07-14 05:32:16.629241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.756 [2024-07-14 05:32:16.639537] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.756 [2024-07-14 05:32:16.639565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.756 [2024-07-14 05:32:16.649739] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.756 [2024-07-14 05:32:16.649767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.756 [2024-07-14 05:32:16.661916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.756 [2024-07-14 05:32:16.661943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.756 [2024-07-14 05:32:16.671114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.756 [2024-07-14 05:32:16.671142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.756 [2024-07-14 05:32:16.681920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.756 [2024-07-14 05:32:16.681947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.756 [2024-07-14 05:32:16.693478] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.756 [2024-07-14 05:32:16.693506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.756 [2024-07-14 05:32:16.702298] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.756 [2024-07-14 05:32:16.702326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.756 [2024-07-14 05:32:16.712918] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.756 [2024-07-14 05:32:16.712947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.756 [2024-07-14 05:32:16.723166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.756 [2024-07-14 05:32:16.723195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.756 [2024-07-14 05:32:16.733697] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.756 [2024-07-14 05:32:16.733726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.756 [2024-07-14 05:32:16.743893] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.756 [2024-07-14 05:32:16.743921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.756 [2024-07-14 05:32:16.754419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.756 [2024-07-14 05:32:16.754446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.756 [2024-07-14 05:32:16.764833] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.756 [2024-07-14 05:32:16.764861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.757 [2024-07-14 05:32:16.775499] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.757 [2024-07-14 05:32:16.775527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.757 [2024-07-14 05:32:16.785560] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.757 [2024-07-14 05:32:16.785588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.757 [2024-07-14 05:32:16.795841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.757 [2024-07-14 05:32:16.795879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.757 [2024-07-14 05:32:16.806520] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.757 [2024-07-14 05:32:16.806548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.757 [2024-07-14 05:32:16.816927] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.757 [2024-07-14 05:32:16.816955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.757 [2024-07-14 05:32:16.829344] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.757 [2024-07-14 05:32:16.829372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.757 [2024-07-14 05:32:16.838488] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.757 [2024-07-14 05:32:16.838517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.757 [2024-07-14 05:32:16.850987] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.757 [2024-07-14 05:32:16.851015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:16.862974] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:16.863008] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:16.872426] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:16.872455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:16.883528] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:16.883555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:16.893278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:16.893306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:16.904059] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:16.904086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:16.916021] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:16.916049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:16.925629] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:16.925657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:16.936555] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:16.936583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:16.946718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:16.946747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:16.957101] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:16.957129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:16.969272] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:16.969300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:16.978471] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:16.978499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:16.989427] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:16.989464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:17.000233] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:17.000261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:17.011236] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:17.011269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:17.023387] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:17.023415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:17.032747] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:17.032775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:17.044720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:17.044748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:17.055714] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:17.055742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:17.066783] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:17.066811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:17.077393] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:17.077420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:17.088170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:17.088198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:17.100471] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:17.100500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.014 [2024-07-14 05:32:17.109708] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.014 [2024-07-14 05:32:17.109735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.121527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.121556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.131911] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.131939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.142845] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.142880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.153551] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.153579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.164544] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.164587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.175027] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.175055] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.186754] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.186782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.197393] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.197420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.210250] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.210278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.219792] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.219828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.231220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.231248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.243713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.243740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.253003] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.253031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.264123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.264158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.274674] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.274702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.284327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.284354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.295164] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.295192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.305682] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.305710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.316703] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.316745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.326738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.326766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.337681] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.337708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.348270] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.348297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.359393] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.359420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.271 [2024-07-14 05:32:17.369826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.271 [2024-07-14 05:32:17.369854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.528 [2024-07-14 05:32:17.380180] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.380208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.390364] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.390392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.400791] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.400820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.411284] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.411311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.423939] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.423974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.432661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.432689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.443978] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.444005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.454591] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.454619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.465134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.465161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.475929] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.475957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.486121] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.486149] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.496123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.496150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.507419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.507446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.517563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.517591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.528714] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.528740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.541305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.541333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.550954] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.550982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.561892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.561920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.573460] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.573487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.582455] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.582483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.593696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.593723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.604069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.604097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.614199] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.614241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.529 [2024-07-14 05:32:17.624768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.529 [2024-07-14 05:32:17.624803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.635352] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.635380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.645835] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.645863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.656620] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.656647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.669502] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.669529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.680665] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.680692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.689287] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.689313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.700634] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.700660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.710703] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.710730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.721628] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.721656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.731624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.731652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.742295] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.742323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.753481] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.753508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.764701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.764727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.775538] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.775565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.786514] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.786540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.797547] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.797574] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.808505] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.808532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.818771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.818798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.829475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.829502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.841834] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.841884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.853227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.853255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.862020] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.862048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.873751] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.873779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.786 [2024-07-14 05:32:17.884676] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.786 [2024-07-14 05:32:17.884703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:17.895660] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:17.895687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:17.907833] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:17.907886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:17.916938] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:17.916966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:17.928193] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:17.928220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:17.938573] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:17.938601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:17.948980] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:17.949007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:17.960211] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:17.960238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:17.972611] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:17.972639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:17.981622] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:17.981649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:17.992714] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:17.992743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:18.003018] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:18.003047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:18.013677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:18.013706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:18.025799] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:18.025827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:18.034550] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:18.034579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:18.045512] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:18.045540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:18.055792] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:18.055819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:18.066167] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:18.066195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:18.076614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:18.076642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:18.088525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:18.088553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:18.097126] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:18.097154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:18.107975] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:18.108003] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:18.118114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:18.118142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:18.128489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:18.128531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.044 [2024-07-14 05:32:18.141071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.044 [2024-07-14 05:32:18.141099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.150327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.150362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.161004] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.161032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.171047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.171076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.180972] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.181000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.191414] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.191442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.204185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.204213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.213199] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.213227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.223911] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.223939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.234153] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.234180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.244510] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.244537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.254628] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.254656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.263810] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.263837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.274428] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.274455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.286389] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.286416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.295769] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.295796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.306434] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.306462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.316778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.316806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.327129] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.327156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.339757] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.339784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.348769] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.348797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.359703] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.359730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.369557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.303 [2024-07-14 05:32:18.369584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.303 [2024-07-14 05:32:18.379908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.304 [2024-07-14 05:32:18.379935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.304 [2024-07-14 05:32:18.390034] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.304 [2024-07-14 05:32:18.390061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.304 [2024-07-14 05:32:18.400020] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.304 [2024-07-14 05:32:18.400046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.410977] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.411006] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.421503] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.421542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.432014] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.432042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.443556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.443585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.452887] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.452914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.464200] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.464227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.474696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.474723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.485341] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.485369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.495818] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.495847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.506903] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.506932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.517263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.517291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.527948] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.527975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.538262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.538290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.548846] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.548885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.560608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.560637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.569572] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.569599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.580489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.580516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.590720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.590747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.600656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.600683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.610723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.610750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.620819] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.620855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.630876] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.630904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.640933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.640960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.651790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.651817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-14 05:32:18.661942] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-14 05:32:18.661970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.820 [2024-07-14 05:32:18.672056] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.820 [2024-07-14 05:32:18.672084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.820 [2024-07-14 05:32:18.682575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.820 [2024-07-14 05:32:18.682604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.820 [2024-07-14 05:32:18.694723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.820 [2024-07-14 05:32:18.694751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.820 [2024-07-14 05:32:18.704549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.820 [2024-07-14 05:32:18.704577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.820 [2024-07-14 05:32:18.715703] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.820 [2024-07-14 05:32:18.715729] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.820 [2024-07-14 05:32:18.726525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.820 [2024-07-14 05:32:18.726552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.820 [2024-07-14 05:32:18.738597] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.820 [2024-07-14 05:32:18.738623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.820 [2024-07-14 05:32:18.748056] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.820 [2024-07-14 05:32:18.748084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.820 [2024-07-14 05:32:18.759273] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.820 [2024-07-14 05:32:18.759300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.820 [2024-07-14 05:32:18.769702] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.820 [2024-07-14 05:32:18.769729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.820 [2024-07-14 05:32:18.780265] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.820 [2024-07-14 05:32:18.780292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.820 [2024-07-14 05:32:18.790702] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.820 [2024-07-14 05:32:18.790729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.820 [2024-07-14 05:32:18.802956] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.820 [2024-07-14 05:32:18.802983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.820 [2024-07-14 05:32:18.812210] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.820 [2024-07-14 05:32:18.812236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.820 [2024-07-14 05:32:18.823357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.820 [2024-07-14 05:32:18.823393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.820 [2024-07-14 05:32:18.834026] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.820 [2024-07-14 05:32:18.834054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.820 [2024-07-14 05:32:18.844018] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.820 [2024-07-14 05:32:18.844045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-14 05:32:18.855085] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-14 05:32:18.855112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-14 05:32:18.865522] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-14 05:32:18.865549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-14 05:32:18.876032] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-14 05:32:18.876060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-14 05:32:18.886296] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-14 05:32:18.886324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-14 05:32:18.896711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-14 05:32:18.896738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-14 05:32:18.906841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-14 05:32:18.906890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-14 05:32:18.917749] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-14 05:32:18.917776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:18.930080] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:18.930108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:18.939343] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:18.939371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:18.950379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:18.950407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:18.960890] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:18.960918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:18.971020] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:18.971047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:18.981616] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:18.981643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:18.992167] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:18.992195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:19.002875] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:19.002902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:19.015301] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:19.015328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:19.026192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:19.026231] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:19.035595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:19.035623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:19.047170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:19.047197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:19.057801] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:19.057829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:19.068450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:19.068478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:19.079388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:19.079417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:19.089188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:19.089216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:19.100182] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:19.100210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:19.110495] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:19.110522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:19.120876] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:19.120903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:19.131838] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:19.131874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:19.142408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:19.142436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:19.152926] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:19.152954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:19.165410] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:19.165438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.079 [2024-07-14 05:32:19.174811] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.079 [2024-07-14 05:32:19.174837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.337 [2024-07-14 05:32:19.186299] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.337 [2024-07-14 05:32:19.186327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.337 [2024-07-14 05:32:19.196862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.337 [2024-07-14 05:32:19.196901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.337 [2024-07-14 05:32:19.208205] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.337 [2024-07-14 05:32:19.208233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.337 [2024-07-14 05:32:19.218820] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.337 [2024-07-14 05:32:19.218862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.337 [2024-07-14 05:32:19.229496] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.337 [2024-07-14 05:32:19.229532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.337 [2024-07-14 05:32:19.240319] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.337 [2024-07-14 05:32:19.240346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.337 [2024-07-14 05:32:19.251183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.337 [2024-07-14 05:32:19.251210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.337 [2024-07-14 05:32:19.261352] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.337 [2024-07-14 05:32:19.261379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.337 [2024-07-14 05:32:19.271716] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.337 [2024-07-14 05:32:19.271743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.337 [2024-07-14 05:32:19.282049] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.337 [2024-07-14 05:32:19.282077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.337 [2024-07-14 05:32:19.292306] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.337 [2024-07-14 05:32:19.292333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.337 [2024-07-14 05:32:19.302378] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.337 [2024-07-14 05:32:19.302405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.337 [2024-07-14 05:32:19.313417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.338 [2024-07-14 05:32:19.313445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.338 [2024-07-14 05:32:19.323029] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.338 [2024-07-14 05:32:19.323056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.338 [2024-07-14 05:32:19.333809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.338 [2024-07-14 05:32:19.333836] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.338 [2024-07-14 05:32:19.344209] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.338 [2024-07-14 05:32:19.344236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.338 [2024-07-14 05:32:19.354968] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.338 [2024-07-14 05:32:19.354995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.338 [2024-07-14 05:32:19.367208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.338 [2024-07-14 05:32:19.367235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.338 [2024-07-14 05:32:19.376658] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.338 [2024-07-14 05:32:19.376686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.338 [2024-07-14 05:32:19.387656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.338 [2024-07-14 05:32:19.387683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.338 [2024-07-14 05:32:19.398633] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.338 [2024-07-14 05:32:19.398660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.338 [2024-07-14 05:32:19.409796] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.338 [2024-07-14 05:32:19.409825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.338 [2024-07-14 05:32:19.420230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.338 [2024-07-14 05:32:19.420256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.338 [2024-07-14 05:32:19.431070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.338 [2024-07-14 05:32:19.431098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.338 [2024-07-14 05:32:19.442040] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.338 [2024-07-14 05:32:19.442068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.596 [2024-07-14 05:32:19.453373] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.596 [2024-07-14 05:32:19.453400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.596 [2024-07-14 05:32:19.463244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.596 [2024-07-14 05:32:19.463272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.596 [2024-07-14 05:32:19.474734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.596 [2024-07-14 05:32:19.474762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.596 [2024-07-14 05:32:19.485498] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.596 [2024-07-14 05:32:19.485525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.596 [2024-07-14 05:32:19.496183] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.596 [2024-07-14 05:32:19.496210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.596 [2024-07-14 05:32:19.507108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.596 [2024-07-14 05:32:19.507136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.596 [2024-07-14 05:32:19.517958] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.517986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-14 05:32:19.528964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.528992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-14 05:32:19.539873] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.539901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-14 05:32:19.550485] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.550528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-14 05:32:19.561057] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.561085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-14 05:32:19.571694] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.571723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-14 05:32:19.582487] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.582515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-14 05:32:19.595113] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.595141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-14 05:32:19.604305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.604333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-14 05:32:19.615290] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.615335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-14 05:32:19.626249] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.626278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-14 05:32:19.636946] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.636974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-14 05:32:19.647832] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.647883] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-14 05:32:19.658409] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.658437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-14 05:32:19.668457] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.668484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-14 05:32:19.680083] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.680111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-14 05:32:19.690908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.690949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-14 05:32:19.701998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-14 05:32:19.702034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.712551] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.712580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.723141] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.723178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.735699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.735727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.745294] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.745321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.756240] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.756268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.767092] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.767120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.778081] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.778109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.789119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.789146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.801683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.801710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.811518] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.811546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.822616] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.822643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.834771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.834798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.844230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.844256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.855440] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.855482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.866032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.866060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.876495] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.876522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.887060] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.887088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.897458] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.897486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.910157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.910183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.920289] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.920316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.931918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.931946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.944692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.944721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.856 [2024-07-14 05:32:19.954347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.856 [2024-07-14 05:32:19.954374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:19.965686] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:19.965713] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:19.976267] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:19.976294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:19.986635] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:19.986663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:19.999277] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:19.999304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.008682] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.008709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.020031] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.020063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.031956] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.031984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.040994] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.041022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.052817] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.052876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.063623] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.063652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.074542] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.074569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.086760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.086786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.096411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.096438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.107513] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.107541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.117608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.117636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.128478] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.128505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.140963] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.140991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.150229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.150255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.161452] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.161480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.171757] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.171785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.182236] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.182278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.192916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.192943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.203420] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.203446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.114 [2024-07-14 05:32:20.214319] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.114 [2024-07-14 05:32:20.214346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.225408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.225438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.236676] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.236718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.247913] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.247948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.259053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.259082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.269679] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.269707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.279988] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.280016] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.290276] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.290303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.301287] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.301315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.311955] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.311983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.322973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.323001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.333676] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.333704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.347825] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.347854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.358464] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.358491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.368999] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.369027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.381813] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.381841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.391618] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.391646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.402707] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.402734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.413562] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.413589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.424064] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.424092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.434136] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.434163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.445190] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.445218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.455434] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.455470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.465850] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.465886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.377 [2024-07-14 05:32:20.476517] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.377 [2024-07-14 05:32:20.476544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.487135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.487178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.499537] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.499563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.509429] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.509456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.520673] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.520700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.531332] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.531359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.541929] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.541957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.554229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.554256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.563355] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.563382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.574778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.574806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.585687] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.585715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.596801] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.596830] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.607686] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.607714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.618472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.618501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.629510] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.629538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.640633] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.640661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.651372] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.651400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.661532] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.661567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.672257] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.672285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.683143] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.683171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.693855] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.693891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.706129] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.706158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.715768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.715798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.727161] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.727190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.637 [2024-07-14 05:32:20.737280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.637 [2024-07-14 05:32:20.737308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.749222] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.749265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.760380] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.760409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.771454] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.771486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.782576] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.782605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.793573] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.793600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.803877] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.803904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.814222] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.814249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.824978] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.825006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.835641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.835669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.846205] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.846232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.856532] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.856558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.866874] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.866908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.877208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.877235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.887607] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.887633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.898633] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.898660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.909049] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.909076] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.919721] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.919748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.930381] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.930408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.940738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.940764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.951457] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.951485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.964020] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.964049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.973302] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.973343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.984920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.984948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.895 [2024-07-14 05:32:20.995489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.895 [2024-07-14 05:32:20.995517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.152 [2024-07-14 05:32:21.006783] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.152 [2024-07-14 05:32:21.006813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.152 [2024-07-14 05:32:21.017980] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.152 [2024-07-14 05:32:21.018008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.028817] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.028845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.039169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.039197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.050090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.050118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.060882] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.060910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.071841] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.071891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.082084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.082112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.093580] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.093607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.104210] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.104237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.114984] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.115011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.125220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.125248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.135621] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.135647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.146260] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.146287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.156951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.156979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.167944] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.167972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.179170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.179197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.189833] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.189882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.200600] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.200627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.211155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.211197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.153 [2024-07-14 05:32:21.223538] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.153 [2024-07-14 05:32:21.223565] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:14.153 [2024-07-14 05:32:21.232920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:14.153 [2024-07-14 05:32:21.232949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:14.153 [2024-07-14 05:32:21.244189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:14.153 [2024-07-14 05:32:21.244218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:14.153 [2024-07-14 05:32:21.254255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:14.153 [2024-07-14 05:32:21.254287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:14.411 [2024-07-14 05:32:21.264610] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:14.411 [2024-07-14 05:32:21.264639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:14.411 [2024-07-14 05:32:21.269882] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:14.411 [2024-07-14 05:32:21.269931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:14.411
00:18:14.411 Latency(us)
00:18:14.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:14.411 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:14.411 Nvme1n1 : 5.01 12042.57 94.08 0.00 0.00 10615.50 4247.70 21165.70
00:18:14.411 ===================================================================================================================
00:18:14.411 Total : 12042.57 94.08 0.00 0.00 10615.50 4247.70 21165.70
00:18:14.411 [2024-07-14 05:32:21.277923] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:14.411 [2024-07-14 05:32:21.277948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:14.411 [2024-07-14 05:32:21.285927] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:14.411 [2024-07-14 05:32:21.285956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:14.411 [2024-07-14 05:32:21.294018] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:14.411 [2024-07-14 05:32:21.294068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:14.411 [2024-07-14 05:32:21.302029] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:14.411 [2024-07-14 05:32:21.302077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:14.411 [2024-07-14 05:32:21.310050] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:14.411 [2024-07-14 05:32:21.310097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:14.411 [2024-07-14 05:32:21.318049] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:14.411 [2024-07-14 05:32:21.318100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:14.411 [2024-07-14 05:32:21.326087] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:14.411 [2024-07-14 05:32:21.326138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:14.411 [2024-07-14 05:32:21.334097]
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.411 [2024-07-14 05:32:21.334146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.411 [2024-07-14 05:32:21.342109] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.411 [2024-07-14 05:32:21.342166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.411 [2024-07-14 05:32:21.350139] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.411 [2024-07-14 05:32:21.350197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.411 [2024-07-14 05:32:21.358166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.411 [2024-07-14 05:32:21.358214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.411 [2024-07-14 05:32:21.366186] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.411 [2024-07-14 05:32:21.366233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.411 [2024-07-14 05:32:21.374199] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.412 [2024-07-14 05:32:21.374244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.412 [2024-07-14 05:32:21.382225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.412 [2024-07-14 05:32:21.382270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.412 [2024-07-14 05:32:21.390255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.412 [2024-07-14 05:32:21.390301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.412 [2024-07-14 05:32:21.398271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.412 [2024-07-14 05:32:21.398304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.412 [2024-07-14 05:32:21.406248] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.412 [2024-07-14 05:32:21.406272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.412 [2024-07-14 05:32:21.414331] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.412 [2024-07-14 05:32:21.414375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.412 [2024-07-14 05:32:21.422346] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.412 [2024-07-14 05:32:21.422393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.412 [2024-07-14 05:32:21.430358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.412 [2024-07-14 05:32:21.430400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.412 [2024-07-14 05:32:21.438349] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.412 [2024-07-14 05:32:21.438375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.412 [2024-07-14 05:32:21.446419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.412 [2024-07-14 05:32:21.446465] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.412 [2024-07-14 05:32:21.454450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.412 [2024-07-14 05:32:21.454498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.412 [2024-07-14 05:32:21.462436] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.412 [2024-07-14 05:32:21.462466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.412 [2024-07-14 05:32:21.470434] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.412 [2024-07-14 05:32:21.470459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.412 [2024-07-14 05:32:21.478454] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.412 [2024-07-14 05:32:21.478478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3230218) - No such process 00:18:14.412 05:32:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3230218 00:18:14.412 05:32:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:14.412 05:32:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.412 05:32:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:14.412 05:32:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.412 05:32:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:14.412 05:32:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.412 05:32:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:14.412 delay0 00:18:14.412 05:32:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.412 05:32:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:14.412 05:32:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.412 05:32:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:14.412 05:32:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.412 05:32:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:14.669 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.669 [2024-07-14 05:32:21.646074] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:21.226 Initializing NVMe Controllers 00:18:21.226 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:21.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:21.226 Initialization complete. Launching workers. 
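The long run of "Requested NSID 1 already in use" / "Unable to add namespace" errors above appears to be produced deliberately by zcopy.sh, which keeps re-issuing nvmf_subsystem_add_ns for NSID 1 while that namespace is still attached, so each retry is rejected. The Nvme1n1 summary printed in between is self-consistent: 12042.57 IOPS at the 8192-byte I/O size works out to 12042.57 × 8192 / 1048576 ≈ 94.08 MiB/s, matching the MiB/s column. The rpc_cmd calls traced just above then swap NSID 1 onto a delay bdev and run the abort example against it. A minimal manual sketch of that sequence, assuming rpc_cmd resolves to scripts/rpc.py on the default /var/tmp/spdk.sock and that the base bdev malloc0 already exists on the target, would be:

  # Run from the SPDK repo root against a target that already exposes nqn.2016-06.io.spdk:cnode1.
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # Wrap malloc0 in a delay bdev (latencies in microseconds) so commands stay queued long enough to abort.
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Drive queued I/O from the initiator side and abort it (flags copied from the trace above).
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The 1000000 us delay parameters presumably exist so that I/O remains outstanding long enough for the initiator-side aborts reported next to land; with an ordinary malloc bdev most commands would complete before an abort could be submitted.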
00:18:21.226 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 87 00:18:21.226 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 374, failed to submit 33 00:18:21.226 success 175, unsuccess 199, failed 0 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:21.226 rmmod nvme_tcp 00:18:21.226 rmmod nvme_fabrics 00:18:21.226 rmmod nvme_keyring 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3228892 ']' 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3228892 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3228892 ']' 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3228892 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3228892 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3228892' 00:18:21.226 killing process with pid 3228892 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3228892 00:18:21.226 05:32:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3228892 00:18:21.226 05:32:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:21.226 05:32:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:21.226 05:32:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:21.226 05:32:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:21.226 05:32:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:21.226 05:32:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.226 05:32:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:21.226 05:32:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.761 05:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:23.761 00:18:23.761 real 0m27.981s 00:18:23.761 user 0m41.021s 00:18:23.761 sys 0m8.664s 00:18:23.761 05:32:30 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:18:23.761 05:32:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:23.761 ************************************ 00:18:23.761 END TEST nvmf_zcopy 00:18:23.761 ************************************ 00:18:23.761 05:32:30 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:23.761 05:32:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:23.761 05:32:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:23.761 05:32:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:23.761 ************************************ 00:18:23.761 START TEST nvmf_nmic 00:18:23.761 ************************************ 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:23.761 * Looking for test storage... 00:18:23.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.761 05:32:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:23.762 05:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:25.701 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:25.701 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:25.701 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:25.701 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:25.702 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:25.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:25.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:18:25.702 00:18:25.702 --- 10.0.0.2 ping statistics --- 00:18:25.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.702 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:25.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:25.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:18:25.702 00:18:25.702 --- 10.0.0.1 ping statistics --- 00:18:25.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.702 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3233595 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3233595 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3233595 ']' 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:25.702 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:25.702 [2024-07-14 05:32:32.640066] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:25.702 [2024-07-14 05:32:32.640136] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.702 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.702 [2024-07-14 05:32:32.707162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:25.702 [2024-07-14 05:32:32.799718] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.702 [2024-07-14 05:32:32.799774] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:25.702 [2024-07-14 05:32:32.799791] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.702 [2024-07-14 05:32:32.799803] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.702 [2024-07-14 05:32:32.799815] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:25.702 [2024-07-14 05:32:32.799895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.702 [2024-07-14 05:32:32.799939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.702 [2024-07-14 05:32:32.800022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:25.702 [2024-07-14 05:32:32.800025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:25.961 [2024-07-14 05:32:32.943730] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:25.961 Malloc0 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:25.961 [2024-07-14 05:32:32.996112] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:25.961 test case1: single bdev can't be used in multiple subsystems 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.961 05:32:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:25.961 [2024-07-14 05:32:33.019983] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:25.961 [2024-07-14 05:32:33.020013] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:25.961 [2024-07-14 05:32:33.020029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.961 request: 00:18:25.961 { 00:18:25.961 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:25.961 "namespace": { 00:18:25.961 "bdev_name": "Malloc0", 00:18:25.961 "no_auto_visible": false 00:18:25.961 }, 00:18:25.961 "method": "nvmf_subsystem_add_ns", 00:18:25.961 "req_id": 1 00:18:25.961 } 00:18:25.961 Got JSON-RPC error response 00:18:25.961 response: 00:18:25.961 { 00:18:25.961 "code": -32602, 00:18:25.961 "message": "Invalid parameters" 00:18:25.961 } 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:25.961 Adding namespace failed - expected result. 
00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:25.961 test case2: host connect to nvmf target in multiple paths 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:25.961 [2024-07-14 05:32:33.028105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.961 05:32:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:26.893 05:32:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:27.457 05:32:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:27.457 05:32:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:27.457 05:32:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:27.457 05:32:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:27.457 05:32:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:29.348 05:32:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:29.348 05:32:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:29.348 05:32:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:29.348 05:32:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:29.348 05:32:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:29.348 05:32:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:29.348 05:32:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:29.348 [global] 00:18:29.348 thread=1 00:18:29.348 invalidate=1 00:18:29.348 rw=write 00:18:29.348 time_based=1 00:18:29.348 runtime=1 00:18:29.348 ioengine=libaio 00:18:29.348 direct=1 00:18:29.348 bs=4096 00:18:29.348 iodepth=1 00:18:29.348 norandommap=0 00:18:29.348 numjobs=1 00:18:29.348 00:18:29.348 verify_dump=1 00:18:29.348 verify_backlog=512 00:18:29.348 verify_state_save=0 00:18:29.348 do_verify=1 00:18:29.348 verify=crc32c-intel 00:18:29.348 [job0] 00:18:29.348 filename=/dev/nvme0n1 00:18:29.348 Could not set queue depth (nvme0n1) 00:18:29.606 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:29.606 fio-3.35 00:18:29.606 Starting 1 thread 00:18:30.976 00:18:30.976 job0: (groupid=0, jobs=1): err= 0: pid=3234115: Sun Jul 14 05:32:37 2024 00:18:30.976 read: IOPS=21, BW=85.5KiB/s (87.6kB/s)(88.0KiB/1029msec) 00:18:30.976 slat (nsec): min=15405, max=44519, avg=31327.91, stdev=9674.22 
00:18:30.976 clat (usec): min=648, max=42220, avg=39523.55, stdev=8697.41 00:18:30.976 lat (usec): min=689, max=42253, avg=39554.88, stdev=8695.14 00:18:30.976 clat percentiles (usec): 00:18:30.976 | 1.00th=[ 652], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:30.976 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:30.976 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:30.976 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:30.976 | 99.99th=[42206] 00:18:30.976 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:18:30.976 slat (usec): min=6, max=27863, avg=69.81, stdev=1230.75 00:18:30.976 clat (usec): min=200, max=385, avg=229.54, stdev=25.00 00:18:30.976 lat (usec): min=212, max=28115, avg=299.35, stdev=1232.04 00:18:30.976 clat percentiles (usec): 00:18:30.976 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 217], 00:18:30.976 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 227], 00:18:30.976 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 260], 00:18:30.976 | 99.00th=[ 355], 99.50th=[ 375], 99.90th=[ 388], 99.95th=[ 388], 00:18:30.976 | 99.99th=[ 388] 00:18:30.976 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:30.976 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:30.976 lat (usec) : 250=89.14%, 500=6.74%, 750=0.19% 00:18:30.976 lat (msec) : 50=3.93% 00:18:30.976 cpu : usr=0.39%, sys=0.78%, ctx=536, majf=0, minf=2 00:18:30.976 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:30.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.976 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.976 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:30.976 00:18:30.976 Run status group 0 (all jobs): 00:18:30.976 READ: bw=85.5KiB/s (87.6kB/s), 85.5KiB/s-85.5KiB/s (87.6kB/s-87.6kB/s), io=88.0KiB (90.1kB), run=1029-1029msec 00:18:30.976 WRITE: bw=1990KiB/s (2038kB/s), 1990KiB/s-1990KiB/s (2038kB/s-2038kB/s), io=2048KiB (2097kB), run=1029-1029msec 00:18:30.976 00:18:30.976 Disk stats (read/write): 00:18:30.976 nvme0n1: ios=45/512, merge=0/0, ticks=1694/105, in_queue=1799, util=98.50% 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:30.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:30.977 rmmod nvme_tcp 00:18:30.977 rmmod nvme_fabrics 00:18:30.977 rmmod nvme_keyring 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3233595 ']' 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3233595 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3233595 ']' 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3233595 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3233595 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3233595' 00:18:30.977 killing process with pid 3233595 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3233595 00:18:30.977 05:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3233595 00:18:31.235 05:32:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:31.235 05:32:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:31.235 05:32:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:31.235 05:32:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:31.235 05:32:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:31.235 05:32:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.235 05:32:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.235 05:32:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.138 05:32:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:33.138 00:18:33.138 real 0m9.915s 00:18:33.138 user 0m22.309s 00:18:33.138 sys 0m2.320s 00:18:33.138 05:32:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:33.138 05:32:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:33.138 ************************************ 00:18:33.138 END TEST nvmf_nmic 00:18:33.138 ************************************ 00:18:33.397 05:32:40 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:33.397 05:32:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:33.397 05:32:40 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:18:33.397 05:32:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:33.397 ************************************ 00:18:33.397 START TEST nvmf_fio_target 00:18:33.397 ************************************ 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:33.397 * Looking for test storage... 00:18:33.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.397 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:33.398 05:32:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:35.298 05:32:42 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:35.298 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:35.299 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:35.299 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.299 05:32:42 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:35.299 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:35.299 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:35.299 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:35.557 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:35.557 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:35.557 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:35.557 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:35.557 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:35.557 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:35.557 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:35.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:35.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:18:35.557 00:18:35.557 --- 10.0.0.2 ping statistics --- 00:18:35.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.557 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:18:35.557 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:35.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:35.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:18:35.557 00:18:35.557 --- 10.0.0.1 ping statistics --- 00:18:35.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.558 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3236280 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3236280 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 3236280 ']' 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:35.558 05:32:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.558 [2024-07-14 05:32:42.604550] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:35.558 [2024-07-14 05:32:42.604627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.558 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.816 [2024-07-14 05:32:42.672175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:35.816 [2024-07-14 05:32:42.762785] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.816 [2024-07-14 05:32:42.762845] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.816 [2024-07-14 05:32:42.762878] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.816 [2024-07-14 05:32:42.762894] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.816 [2024-07-14 05:32:42.762905] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:35.816 [2024-07-14 05:32:42.763000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.816 [2024-07-14 05:32:42.763078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.816 [2024-07-14 05:32:42.763175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:35.816 [2024-07-14 05:32:42.763177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.816 05:32:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:35.816 05:32:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:18:35.816 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:35.816 05:32:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.816 05:32:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.816 05:32:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.816 05:32:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:36.074 [2024-07-14 05:32:43.136461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.074 05:32:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:36.638 05:32:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:36.638 05:32:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:36.638 05:32:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:36.638 05:32:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:37.203 05:32:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:37.203 05:32:44 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:37.461 05:32:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:37.461 05:32:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:37.719 05:32:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:37.977 05:32:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:37.977 05:32:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:38.234 05:32:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:38.234 05:32:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:38.492 05:32:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:38.492 05:32:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:38.750 05:32:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:39.008 05:32:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:39.008 05:32:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:39.266 05:32:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:39.266 05:32:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:39.523 05:32:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:39.523 [2024-07-14 05:32:46.600901] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.523 05:32:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:39.781 05:32:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:40.039 05:32:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:41.004 05:32:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:41.004 05:32:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:18:41.004 05:32:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # 
local nvme_device_counter=1 nvme_devices=0 00:18:41.004 05:32:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:18:41.004 05:32:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:18:41.004 05:32:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:18:42.901 05:32:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:42.901 05:32:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:42.901 05:32:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:42.901 05:32:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:18:42.901 05:32:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:42.901 05:32:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:18:42.901 05:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:42.901 [global] 00:18:42.901 thread=1 00:18:42.901 invalidate=1 00:18:42.901 rw=write 00:18:42.901 time_based=1 00:18:42.901 runtime=1 00:18:42.901 ioengine=libaio 00:18:42.901 direct=1 00:18:42.901 bs=4096 00:18:42.901 iodepth=1 00:18:42.901 norandommap=0 00:18:42.901 numjobs=1 00:18:42.901 00:18:42.901 verify_dump=1 00:18:42.901 verify_backlog=512 00:18:42.901 verify_state_save=0 00:18:42.901 do_verify=1 00:18:42.901 verify=crc32c-intel 00:18:42.901 [job0] 00:18:42.901 filename=/dev/nvme0n1 00:18:42.901 [job1] 00:18:42.902 filename=/dev/nvme0n2 00:18:42.902 [job2] 00:18:42.902 filename=/dev/nvme0n3 00:18:42.902 [job3] 00:18:42.902 filename=/dev/nvme0n4 00:18:42.902 Could not set queue depth (nvme0n1) 00:18:42.902 Could not set queue depth (nvme0n2) 00:18:42.902 Could not set queue depth (nvme0n3) 00:18:42.902 Could not set queue depth (nvme0n4) 00:18:43.160 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:43.160 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:43.160 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:43.160 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:43.160 fio-3.35 00:18:43.160 Starting 4 threads 00:18:44.530 00:18:44.530 job0: (groupid=0, jobs=1): err= 0: pid=3237252: Sun Jul 14 05:32:51 2024 00:18:44.530 read: IOPS=19, BW=79.3KiB/s (81.2kB/s)(80.0KiB/1009msec) 00:18:44.530 slat (nsec): min=9663, max=33696, avg=16736.55, stdev=7994.28 00:18:44.530 clat (usec): min=40887, max=41318, avg=40990.57, stdev=83.05 00:18:44.530 lat (usec): min=40920, max=41328, avg=41007.31, stdev=80.26 00:18:44.530 clat percentiles (usec): 00:18:44.530 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:44.530 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:44.530 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:44.530 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:44.530 | 99.99th=[41157] 00:18:44.530 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:18:44.530 slat (nsec): min=7506, max=68974, avg=17088.11, stdev=8529.12 00:18:44.530 clat (usec): min=248, 
max=571, avg=346.85, stdev=59.26 00:18:44.530 lat (usec): min=257, max=600, avg=363.94, stdev=60.21 00:18:44.530 clat percentiles (usec): 00:18:44.530 | 1.00th=[ 253], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 285], 00:18:44.530 | 30.00th=[ 302], 40.00th=[ 322], 50.00th=[ 347], 60.00th=[ 371], 00:18:44.530 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 420], 95.00th=[ 445], 00:18:44.530 | 99.00th=[ 494], 99.50th=[ 502], 99.90th=[ 570], 99.95th=[ 570], 00:18:44.530 | 99.99th=[ 570] 00:18:44.530 bw ( KiB/s): min= 4096, max= 4096, per=33.63%, avg=4096.00, stdev= 0.00, samples=1 00:18:44.530 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:44.530 lat (usec) : 250=0.19%, 500=95.49%, 750=0.56% 00:18:44.530 lat (msec) : 50=3.76% 00:18:44.530 cpu : usr=0.60%, sys=1.09%, ctx=532, majf=0, minf=1 00:18:44.530 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:44.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.530 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.530 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:44.530 job1: (groupid=0, jobs=1): err= 0: pid=3237255: Sun Jul 14 05:32:51 2024 00:18:44.530 read: IOPS=1062, BW=4252KiB/s (4354kB/s)(4256KiB/1001msec) 00:18:44.530 slat (nsec): min=4950, max=67922, avg=24719.55, stdev=11494.52 00:18:44.530 clat (usec): min=362, max=714, avg=479.29, stdev=53.03 00:18:44.530 lat (usec): min=377, max=720, avg=504.01, stdev=56.02 00:18:44.530 clat percentiles (usec): 00:18:44.530 | 1.00th=[ 375], 5.00th=[ 400], 10.00th=[ 416], 20.00th=[ 441], 00:18:44.530 | 30.00th=[ 449], 40.00th=[ 457], 50.00th=[ 469], 60.00th=[ 482], 00:18:44.530 | 70.00th=[ 502], 80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 562], 00:18:44.530 | 99.00th=[ 603], 99.50th=[ 627], 99.90th=[ 676], 99.95th=[ 717], 00:18:44.530 | 99.99th=[ 717] 00:18:44.530 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:44.530 slat (nsec): min=6277, max=70734, avg=14002.14, stdev=7505.36 00:18:44.530 clat (usec): min=201, max=592, avg=277.99, stdev=56.01 00:18:44.530 lat (usec): min=211, max=631, avg=291.99, stdev=59.56 00:18:44.530 clat percentiles (usec): 00:18:44.530 | 1.00th=[ 215], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243], 00:18:44.530 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 265], 00:18:44.530 | 70.00th=[ 277], 80.00th=[ 310], 90.00th=[ 355], 95.00th=[ 408], 00:18:44.530 | 99.00th=[ 478], 99.50th=[ 510], 99.90th=[ 545], 99.95th=[ 594], 00:18:44.530 | 99.99th=[ 594] 00:18:44.530 bw ( KiB/s): min= 6448, max= 6448, per=52.95%, avg=6448.00, stdev= 0.00, samples=1 00:18:44.530 iops : min= 1612, max= 1612, avg=1612.00, stdev= 0.00, samples=1 00:18:44.530 lat (usec) : 250=24.31%, 500=62.92%, 750=12.77% 00:18:44.530 cpu : usr=2.90%, sys=5.00%, ctx=2600, majf=0, minf=1 00:18:44.530 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:44.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.530 issued rwts: total=1064,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.530 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:44.530 job2: (groupid=0, jobs=1): err= 0: pid=3237261: Sun Jul 14 05:32:51 2024 00:18:44.530 read: IOPS=20, BW=83.6KiB/s (85.6kB/s)(84.0KiB/1005msec) 00:18:44.530 slat (nsec): min=7794, 
max=32106, avg=17717.62, stdev=8268.19 00:18:44.530 clat (usec): min=20033, max=41131, avg=39983.54, stdev=4571.50 00:18:44.530 lat (usec): min=20048, max=41144, avg=40001.26, stdev=4572.03 00:18:44.530 clat percentiles (usec): 00:18:44.531 | 1.00th=[20055], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:44.531 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:44.531 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:44.531 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:44.531 | 99.99th=[41157] 00:18:44.531 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:18:44.531 slat (nsec): min=6746, max=71435, avg=16516.42, stdev=10353.33 00:18:44.531 clat (usec): min=205, max=518, avg=301.47, stdev=71.56 00:18:44.531 lat (usec): min=215, max=542, avg=317.99, stdev=74.26 00:18:44.531 clat percentiles (usec): 00:18:44.531 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 237], 00:18:44.531 | 30.00th=[ 245], 40.00th=[ 255], 50.00th=[ 277], 60.00th=[ 314], 00:18:44.531 | 70.00th=[ 338], 80.00th=[ 371], 90.00th=[ 404], 95.00th=[ 433], 00:18:44.531 | 99.00th=[ 490], 99.50th=[ 502], 99.90th=[ 519], 99.95th=[ 519], 00:18:44.531 | 99.99th=[ 519] 00:18:44.531 bw ( KiB/s): min= 4096, max= 4096, per=33.63%, avg=4096.00, stdev= 0.00, samples=1 00:18:44.531 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:44.531 lat (usec) : 250=33.77%, 500=61.54%, 750=0.75% 00:18:44.531 lat (msec) : 50=3.94% 00:18:44.531 cpu : usr=0.40%, sys=0.90%, ctx=533, majf=0, minf=1 00:18:44.531 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:44.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.531 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.531 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:44.531 job3: (groupid=0, jobs=1): err= 0: pid=3237263: Sun Jul 14 05:32:51 2024 00:18:44.531 read: IOPS=18, BW=75.8KiB/s (77.7kB/s)(76.0KiB/1002msec) 00:18:44.531 slat (nsec): min=13427, max=47466, avg=28623.42, stdev=10029.20 00:18:44.531 clat (usec): min=40908, max=41997, avg=41445.76, stdev=506.20 00:18:44.531 lat (usec): min=40942, max=42036, avg=41474.38, stdev=503.69 00:18:44.531 clat percentiles (usec): 00:18:44.531 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:44.531 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:18:44.531 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:44.531 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:44.531 | 99.99th=[42206] 00:18:44.531 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:18:44.531 slat (nsec): min=6803, max=76050, avg=21903.00, stdev=10972.08 00:18:44.531 clat (usec): min=250, max=587, avg=390.47, stdev=66.30 00:18:44.531 lat (usec): min=267, max=625, avg=412.37, stdev=68.45 00:18:44.531 clat percentiles (usec): 00:18:44.531 | 1.00th=[ 273], 5.00th=[ 293], 10.00th=[ 314], 20.00th=[ 334], 00:18:44.531 | 30.00th=[ 355], 40.00th=[ 367], 50.00th=[ 379], 60.00th=[ 396], 00:18:44.531 | 70.00th=[ 412], 80.00th=[ 441], 90.00th=[ 490], 95.00th=[ 529], 00:18:44.531 | 99.00th=[ 545], 99.50th=[ 570], 99.90th=[ 586], 99.95th=[ 586], 00:18:44.531 | 99.99th=[ 586] 00:18:44.531 bw ( KiB/s): min= 4096, max= 4096, per=33.63%, avg=4096.00, stdev= 0.00, 
samples=1 00:18:44.531 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:44.531 lat (usec) : 500=88.14%, 750=8.29% 00:18:44.531 lat (msec) : 50=3.58% 00:18:44.531 cpu : usr=0.80%, sys=0.90%, ctx=531, majf=0, minf=2 00:18:44.531 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:44.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.531 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.531 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:44.531 00:18:44.531 Run status group 0 (all jobs): 00:18:44.531 READ: bw=4456KiB/s (4563kB/s), 75.8KiB/s-4252KiB/s (77.7kB/s-4354kB/s), io=4496KiB (4604kB), run=1001-1009msec 00:18:44.531 WRITE: bw=11.9MiB/s (12.5MB/s), 2030KiB/s-6138KiB/s (2078kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1009msec 00:18:44.531 00:18:44.531 Disk stats (read/write): 00:18:44.531 nvme0n1: ios=66/512, merge=0/0, ticks=825/163, in_queue=988, util=91.28% 00:18:44.531 nvme0n2: ios=1048/1048, merge=0/0, ticks=396/283, in_queue=679, util=87.05% 00:18:44.531 nvme0n3: ios=71/512, merge=0/0, ticks=811/150, in_queue=961, util=95.80% 00:18:44.531 nvme0n4: ios=69/512, merge=0/0, ticks=730/184, in_queue=914, util=95.66% 00:18:44.531 05:32:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:44.531 [global] 00:18:44.531 thread=1 00:18:44.531 invalidate=1 00:18:44.531 rw=randwrite 00:18:44.531 time_based=1 00:18:44.531 runtime=1 00:18:44.531 ioengine=libaio 00:18:44.531 direct=1 00:18:44.531 bs=4096 00:18:44.531 iodepth=1 00:18:44.531 norandommap=0 00:18:44.531 numjobs=1 00:18:44.531 00:18:44.531 verify_dump=1 00:18:44.531 verify_backlog=512 00:18:44.531 verify_state_save=0 00:18:44.531 do_verify=1 00:18:44.531 verify=crc32c-intel 00:18:44.531 [job0] 00:18:44.531 filename=/dev/nvme0n1 00:18:44.531 [job1] 00:18:44.531 filename=/dev/nvme0n2 00:18:44.531 [job2] 00:18:44.531 filename=/dev/nvme0n3 00:18:44.531 [job3] 00:18:44.531 filename=/dev/nvme0n4 00:18:44.531 Could not set queue depth (nvme0n1) 00:18:44.531 Could not set queue depth (nvme0n2) 00:18:44.531 Could not set queue depth (nvme0n3) 00:18:44.531 Could not set queue depth (nvme0n4) 00:18:44.531 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:44.531 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:44.531 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:44.531 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:44.531 fio-3.35 00:18:44.531 Starting 4 threads 00:18:45.905 00:18:45.905 job0: (groupid=0, jobs=1): err= 0: pid=3237568: Sun Jul 14 05:32:52 2024 00:18:45.905 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:45.905 slat (nsec): min=5600, max=45899, avg=13858.16, stdev=7660.65 00:18:45.905 clat (usec): min=430, max=834, avg=513.50, stdev=59.81 00:18:45.905 lat (usec): min=439, max=841, avg=527.36, stdev=59.74 00:18:45.905 clat percentiles (usec): 00:18:45.905 | 1.00th=[ 441], 5.00th=[ 453], 10.00th=[ 461], 20.00th=[ 469], 00:18:45.905 | 30.00th=[ 478], 40.00th=[ 486], 50.00th=[ 490], 60.00th=[ 502], 00:18:45.905 | 70.00th=[ 515], 80.00th=[ 
570], 90.00th=[ 611], 95.00th=[ 635], 00:18:45.905 | 99.00th=[ 685], 99.50th=[ 701], 99.90th=[ 791], 99.95th=[ 832], 00:18:45.905 | 99.99th=[ 832] 00:18:45.905 write: IOPS=1410, BW=5642KiB/s (5778kB/s)(5648KiB/1001msec); 0 zone resets 00:18:45.905 slat (nsec): min=7182, max=77481, avg=17608.73, stdev=8593.99 00:18:45.905 clat (usec): min=215, max=1426, avg=300.15, stdev=72.80 00:18:45.905 lat (usec): min=224, max=1441, avg=317.76, stdev=74.38 00:18:45.905 clat percentiles (usec): 00:18:45.905 | 1.00th=[ 229], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 258], 00:18:45.905 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:18:45.905 | 70.00th=[ 302], 80.00th=[ 334], 90.00th=[ 392], 95.00th=[ 437], 00:18:45.905 | 99.00th=[ 515], 99.50th=[ 586], 99.90th=[ 1074], 99.95th=[ 1434], 00:18:45.905 | 99.99th=[ 1434] 00:18:45.905 bw ( KiB/s): min= 5944, max= 5944, per=32.29%, avg=5944.00, stdev= 0.00, samples=1 00:18:45.905 iops : min= 1486, max= 1486, avg=1486.00, stdev= 0.00, samples=1 00:18:45.905 lat (usec) : 250=7.02%, 500=75.21%, 750=17.61%, 1000=0.08% 00:18:45.905 lat (msec) : 2=0.08% 00:18:45.905 cpu : usr=3.70%, sys=4.80%, ctx=2436, majf=0, minf=1 00:18:45.905 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:45.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.905 issued rwts: total=1024,1412,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.905 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:45.905 job1: (groupid=0, jobs=1): err= 0: pid=3237588: Sun Jul 14 05:32:52 2024 00:18:45.905 read: IOPS=1259, BW=5039KiB/s (5160kB/s)(5044KiB/1001msec) 00:18:45.905 slat (nsec): min=4888, max=76449, avg=20049.95, stdev=11676.65 00:18:45.905 clat (usec): min=312, max=1204, avg=421.13, stdev=66.16 00:18:45.905 lat (usec): min=318, max=1217, avg=441.18, stdev=69.71 00:18:45.905 clat percentiles (usec): 00:18:45.905 | 1.00th=[ 330], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 367], 00:18:45.905 | 30.00th=[ 388], 40.00th=[ 400], 50.00th=[ 412], 60.00th=[ 420], 00:18:45.905 | 70.00th=[ 437], 80.00th=[ 465], 90.00th=[ 506], 95.00th=[ 537], 00:18:45.905 | 99.00th=[ 603], 99.50th=[ 644], 99.90th=[ 947], 99.95th=[ 1205], 00:18:45.905 | 99.99th=[ 1205] 00:18:45.905 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:45.905 slat (nsec): min=6635, max=82848, avg=16470.72, stdev=8901.71 00:18:45.905 clat (usec): min=206, max=2777, avg=263.26, stdev=80.99 00:18:45.905 lat (usec): min=215, max=2816, avg=279.73, stdev=82.74 00:18:45.905 clat percentiles (usec): 00:18:45.905 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:18:45.905 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:18:45.905 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 326], 95.00th=[ 371], 00:18:45.905 | 99.00th=[ 445], 99.50th=[ 469], 99.90th=[ 873], 99.95th=[ 2769], 00:18:45.905 | 99.99th=[ 2769] 00:18:45.905 bw ( KiB/s): min= 6680, max= 6680, per=36.29%, avg=6680.00, stdev= 0.00, samples=1 00:18:45.905 iops : min= 1670, max= 1670, avg=1670.00, stdev= 0.00, samples=1 00:18:45.905 lat (usec) : 250=30.32%, 500=64.68%, 750=4.83%, 1000=0.11% 00:18:45.905 lat (msec) : 2=0.04%, 4=0.04% 00:18:45.905 cpu : usr=2.70%, sys=5.20%, ctx=2799, majf=0, minf=2 00:18:45.905 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:45.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:18:45.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.905 issued rwts: total=1261,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.905 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:45.905 job2: (groupid=0, jobs=1): err= 0: pid=3237609: Sun Jul 14 05:32:52 2024 00:18:45.905 read: IOPS=20, BW=81.6KiB/s (83.5kB/s)(84.0KiB/1030msec) 00:18:45.905 slat (nsec): min=7475, max=42408, avg=18511.00, stdev=8771.08 00:18:45.905 clat (usec): min=40869, max=41085, avg=40979.29, stdev=54.57 00:18:45.905 lat (usec): min=40912, max=41112, avg=40997.80, stdev=50.42 00:18:45.905 clat percentiles (usec): 00:18:45.905 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:45.905 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:45.905 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:45.905 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:45.905 | 99.99th=[41157] 00:18:45.905 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:18:45.905 slat (nsec): min=7851, max=74524, avg=20734.66, stdev=9874.46 00:18:45.905 clat (usec): min=220, max=943, avg=302.15, stdev=65.98 00:18:45.905 lat (usec): min=230, max=972, avg=322.89, stdev=68.72 00:18:45.905 clat percentiles (usec): 00:18:45.905 | 1.00th=[ 235], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265], 00:18:45.905 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:18:45.905 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 355], 95.00th=[ 433], 00:18:45.905 | 99.00th=[ 523], 99.50th=[ 635], 99.90th=[ 947], 99.95th=[ 947], 00:18:45.905 | 99.99th=[ 947] 00:18:45.905 bw ( KiB/s): min= 4096, max= 4096, per=22.25%, avg=4096.00, stdev= 0.00, samples=1 00:18:45.905 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:45.905 lat (usec) : 250=4.50%, 500=89.87%, 750=1.31%, 1000=0.38% 00:18:45.905 lat (msec) : 50=3.94% 00:18:45.905 cpu : usr=0.19%, sys=1.85%, ctx=534, majf=0, minf=1 00:18:45.905 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:45.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.905 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.905 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:45.905 job3: (groupid=0, jobs=1): err= 0: pid=3237610: Sun Jul 14 05:32:52 2024 00:18:45.905 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:45.905 slat (nsec): min=5014, max=70683, avg=20723.93, stdev=10771.86 00:18:45.905 clat (usec): min=331, max=853, avg=559.01, stdev=78.07 00:18:45.905 lat (usec): min=364, max=866, avg=579.74, stdev=74.92 00:18:45.905 clat percentiles (usec): 00:18:45.905 | 1.00th=[ 392], 5.00th=[ 412], 10.00th=[ 453], 20.00th=[ 510], 00:18:45.905 | 30.00th=[ 529], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 562], 00:18:45.906 | 70.00th=[ 594], 80.00th=[ 619], 90.00th=[ 668], 95.00th=[ 685], 00:18:45.906 | 99.00th=[ 742], 99.50th=[ 783], 99.90th=[ 799], 99.95th=[ 857], 00:18:45.906 | 99.99th=[ 857] 00:18:45.906 write: IOPS=1278, BW=5115KiB/s (5238kB/s)(5120KiB/1001msec); 0 zone resets 00:18:45.906 slat (nsec): min=6544, max=68375, avg=16957.51, stdev=8638.89 00:18:45.906 clat (usec): min=209, max=1491, avg=291.37, stdev=77.75 00:18:45.906 lat (usec): min=216, max=1503, avg=308.33, stdev=80.67 00:18:45.906 clat percentiles (usec): 00:18:45.906 | 1.00th=[ 
219], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 243], 00:18:45.906 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 273], 00:18:45.906 | 70.00th=[ 289], 80.00th=[ 343], 90.00th=[ 400], 95.00th=[ 429], 00:18:45.906 | 99.00th=[ 506], 99.50th=[ 562], 99.90th=[ 865], 99.95th=[ 1500], 00:18:45.906 | 99.99th=[ 1500] 00:18:45.906 bw ( KiB/s): min= 5776, max= 5776, per=31.38%, avg=5776.00, stdev= 0.00, samples=1 00:18:45.906 iops : min= 1444, max= 1444, avg=1444.00, stdev= 0.00, samples=1 00:18:45.906 lat (usec) : 250=16.15%, 500=45.62%, 750=37.67%, 1000=0.52% 00:18:45.906 lat (msec) : 2=0.04% 00:18:45.906 cpu : usr=2.50%, sys=4.40%, ctx=2306, majf=0, minf=1 00:18:45.906 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:45.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.906 issued rwts: total=1024,1280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.906 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:45.906 00:18:45.906 Run status group 0 (all jobs): 00:18:45.906 READ: bw=12.6MiB/s (13.2MB/s), 81.6KiB/s-5039KiB/s (83.5kB/s-5160kB/s), io=13.0MiB (13.6MB), run=1001-1030msec 00:18:45.906 WRITE: bw=18.0MiB/s (18.8MB/s), 1988KiB/s-6138KiB/s (2036kB/s-6285kB/s), io=18.5MiB (19.4MB), run=1001-1030msec 00:18:45.906 00:18:45.906 Disk stats (read/write): 00:18:45.906 nvme0n1: ios=1002/1024, merge=0/0, ticks=787/294, in_queue=1081, util=97.39% 00:18:45.906 nvme0n2: ios=1079/1320, merge=0/0, ticks=475/350, in_queue=825, util=91.25% 00:18:45.906 nvme0n3: ios=73/512, merge=0/0, ticks=1477/136, in_queue=1613, util=93.51% 00:18:45.906 nvme0n4: ios=981/1024, merge=0/0, ticks=956/286, in_queue=1242, util=94.20% 00:18:45.906 05:32:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:45.906 [global] 00:18:45.906 thread=1 00:18:45.906 invalidate=1 00:18:45.906 rw=write 00:18:45.906 time_based=1 00:18:45.906 runtime=1 00:18:45.906 ioengine=libaio 00:18:45.906 direct=1 00:18:45.906 bs=4096 00:18:45.906 iodepth=128 00:18:45.906 norandommap=0 00:18:45.906 numjobs=1 00:18:45.906 00:18:45.906 verify_dump=1 00:18:45.906 verify_backlog=512 00:18:45.906 verify_state_save=0 00:18:45.906 do_verify=1 00:18:45.906 verify=crc32c-intel 00:18:45.906 [job0] 00:18:45.906 filename=/dev/nvme0n1 00:18:45.906 [job1] 00:18:45.906 filename=/dev/nvme0n2 00:18:45.906 [job2] 00:18:45.906 filename=/dev/nvme0n3 00:18:45.906 [job3] 00:18:45.906 filename=/dev/nvme0n4 00:18:45.906 Could not set queue depth (nvme0n1) 00:18:45.906 Could not set queue depth (nvme0n2) 00:18:45.906 Could not set queue depth (nvme0n3) 00:18:45.906 Could not set queue depth (nvme0n4) 00:18:45.906 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:45.906 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:45.906 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:45.906 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:45.906 fio-3.35 00:18:45.906 Starting 4 threads 00:18:47.284 00:18:47.284 job0: (groupid=0, jobs=1): err= 0: pid=3237836: Sun Jul 14 05:32:54 2024 00:18:47.284 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:18:47.284 
slat (usec): min=3, max=3589, avg=77.86, stdev=346.03 00:18:47.284 clat (usec): min=6805, max=14913, avg=10413.31, stdev=1208.10 00:18:47.284 lat (usec): min=6823, max=14957, avg=10491.17, stdev=1217.39 00:18:47.284 clat percentiles (usec): 00:18:47.284 | 1.00th=[ 7767], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[ 9372], 00:18:47.284 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:18:47.284 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11994], 95.00th=[12387], 00:18:47.284 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14091], 99.95th=[14877], 00:18:47.284 | 99.99th=[14877] 00:18:47.284 write: IOPS=6447, BW=25.2MiB/s (26.4MB/s)(25.3MiB/1004msec); 0 zone resets 00:18:47.284 slat (usec): min=4, max=3104, avg=70.21, stdev=286.02 00:18:47.284 clat (usec): min=2052, max=13813, avg=9749.12, stdev=1112.38 00:18:47.284 lat (usec): min=3534, max=13822, avg=9819.33, stdev=1123.68 00:18:47.284 clat percentiles (usec): 00:18:47.284 | 1.00th=[ 6652], 5.00th=[ 7832], 10.00th=[ 8291], 20.00th=[ 8979], 00:18:47.284 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:18:47.284 | 70.00th=[10290], 80.00th=[10552], 90.00th=[11076], 95.00th=[11338], 00:18:47.284 | 99.00th=[11994], 99.50th=[12125], 99.90th=[12649], 99.95th=[12780], 00:18:47.284 | 99.99th=[13829] 00:18:47.284 bw ( KiB/s): min=25144, max=25616, per=54.22%, avg=25380.00, stdev=333.75, samples=2 00:18:47.284 iops : min= 6286, max= 6404, avg=6345.00, stdev=83.44, samples=2 00:18:47.284 lat (msec) : 4=0.17%, 10=45.57%, 20=54.25% 00:18:47.284 cpu : usr=9.27%, sys=12.96%, ctx=685, majf=0, minf=9 00:18:47.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:47.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.284 issued rwts: total=6144,6473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.284 job1: (groupid=0, jobs=1): err= 0: pid=3237837: Sun Jul 14 05:32:54 2024 00:18:47.284 read: IOPS=3002, BW=11.7MiB/s (12.3MB/s)(12.0MiB/1023msec) 00:18:47.284 slat (usec): min=3, max=8264, avg=84.61, stdev=500.47 00:18:47.284 clat (usec): min=974, max=270629, avg=13568.54, stdev=24083.68 00:18:47.284 lat (usec): min=982, max=270634, avg=13653.14, stdev=24083.98 00:18:47.284 clat percentiles (usec): 00:18:47.284 | 1.00th=[ 1942], 5.00th=[ 2442], 10.00th=[ 4621], 20.00th=[ 9110], 00:18:47.284 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 10552], 60.00th=[ 11207], 00:18:47.284 | 70.00th=[ 12387], 80.00th=[ 15270], 90.00th=[ 18744], 95.00th=[ 21627], 00:18:47.284 | 99.00th=[ 29492], 99.50th=[267387], 99.90th=[270533], 99.95th=[270533], 00:18:47.284 | 99.99th=[270533] 00:18:47.284 write: IOPS=3392, BW=13.3MiB/s (13.9MB/s)(13.6MiB/1023msec); 0 zone resets 00:18:47.284 slat (usec): min=3, max=268060, avg=193.46, stdev=4705.12 00:18:47.284 clat (usec): min=603, max=372231, avg=23724.54, stdev=59745.27 00:18:47.284 lat (usec): min=619, max=372236, avg=23918.00, stdev=59968.35 00:18:47.284 clat percentiles (usec): 00:18:47.284 | 1.00th=[ 1369], 5.00th=[ 5145], 10.00th=[ 6390], 20.00th=[ 8225], 00:18:47.284 | 30.00th=[ 9503], 40.00th=[ 10421], 50.00th=[ 10814], 60.00th=[ 11994], 00:18:47.284 | 70.00th=[ 14353], 80.00th=[ 19530], 90.00th=[ 25035], 95.00th=[ 37487], 00:18:47.284 | 99.00th=[371196], 99.50th=[371196], 99.90th=[371196], 99.95th=[371196], 00:18:47.284 | 99.99th=[371196] 00:18:47.284 bw ( KiB/s): min= 6400, 
max=20352, per=28.58%, avg=13376.00, stdev=9865.55, samples=2 00:18:47.284 iops : min= 1600, max= 5088, avg=3344.00, stdev=2466.39, samples=2 00:18:47.284 lat (usec) : 750=0.02%, 1000=0.06% 00:18:47.284 lat (msec) : 2=2.22%, 4=4.23%, 10=33.30%, 20=46.45%, 50=11.13% 00:18:47.284 lat (msec) : 100=0.66%, 500=1.94% 00:18:47.284 cpu : usr=4.79%, sys=5.58%, ctx=463, majf=0, minf=15 00:18:47.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:47.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.284 issued rwts: total=3072,3471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.284 job2: (groupid=0, jobs=1): err= 0: pid=3237838: Sun Jul 14 05:32:54 2024 00:18:47.284 read: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec) 00:18:47.284 slat (usec): min=2, max=62860, avg=674.09, stdev=5204.78 00:18:47.284 clat (msec): min=34, max=154, avg=96.96, stdev=28.02 00:18:47.284 lat (msec): min=34, max=157, avg=97.64, stdev=28.18 00:18:47.284 clat percentiles (msec): 00:18:47.284 | 1.00th=[ 35], 5.00th=[ 72], 10.00th=[ 72], 20.00th=[ 79], 00:18:47.284 | 30.00th=[ 80], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 92], 00:18:47.284 | 70.00th=[ 105], 80.00th=[ 131], 90.00th=[ 148], 95.00th=[ 155], 00:18:47.284 | 99.00th=[ 155], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:18:47.284 | 99.99th=[ 155] 00:18:47.284 write: IOPS=772, BW=3092KiB/s (3166kB/s)(3132KiB/1013msec); 0 zone resets 00:18:47.284 slat (usec): min=3, max=75359, avg=833.24, stdev=5848.34 00:18:47.284 clat (msec): min=9, max=155, avg=78.99, stdev=33.45 00:18:47.284 lat (msec): min=21, max=161, avg=79.83, stdev=33.82 00:18:47.284 clat percentiles (msec): 00:18:47.284 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 37], 20.00th=[ 63], 00:18:47.284 | 30.00th=[ 69], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 77], 00:18:47.284 | 70.00th=[ 79], 80.00th=[ 105], 90.00th=[ 142], 95.00th=[ 150], 00:18:47.284 | 99.00th=[ 155], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:18:47.284 | 99.99th=[ 155] 00:18:47.284 bw ( KiB/s): min= 1144, max= 4096, per=5.60%, avg=2620.00, stdev=2087.38, samples=2 00:18:47.284 iops : min= 286, max= 1024, avg=655.00, stdev=521.84, samples=2 00:18:47.284 lat (msec) : 10=0.08%, 50=11.20%, 100=62.16%, 250=26.56% 00:18:47.284 cpu : usr=0.59%, sys=0.99%, ctx=65, majf=0, minf=13 00:18:47.284 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:18:47.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.284 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.284 issued rwts: total=512,783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.284 job3: (groupid=0, jobs=1): err= 0: pid=3237839: Sun Jul 14 05:32:54 2024 00:18:47.284 read: IOPS=1012, BW=4051KiB/s (4149kB/s)(4096KiB/1011msec) 00:18:47.284 slat (usec): min=3, max=66013, avg=520.40, stdev=4669.62 00:18:47.284 clat (msec): min=22, max=153, avg=62.73, stdev=28.76 00:18:47.284 lat (msec): min=22, max=153, avg=63.25, stdev=29.06 00:18:47.284 clat percentiles (msec): 00:18:47.284 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 27], 00:18:47.284 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 66], 60.00th=[ 70], 00:18:47.284 | 70.00th=[ 72], 80.00th=[ 86], 90.00th=[ 97], 95.00th=[ 126], 00:18:47.284 | 99.00th=[ 129], 99.50th=[ 
129], 99.90th=[ 153], 99.95th=[ 155], 00:18:47.284 | 99.99th=[ 155] 00:18:47.284 write: IOPS=1230, BW=4922KiB/s (5040kB/s)(4976KiB/1011msec); 0 zone resets 00:18:47.284 slat (usec): min=4, max=58936, avg=373.34, stdev=3650.83 00:18:47.284 clat (usec): min=1629, max=145470, avg=48827.40, stdev=22358.15 00:18:47.284 lat (msec): min=13, max=145, avg=49.20, stdev=22.54 00:18:47.284 clat percentiles (msec): 00:18:47.284 | 1.00th=[ 17], 5.00th=[ 19], 10.00th=[ 20], 20.00th=[ 24], 00:18:47.284 | 30.00th=[ 29], 40.00th=[ 37], 50.00th=[ 55], 60.00th=[ 60], 00:18:47.284 | 70.00th=[ 64], 80.00th=[ 67], 90.00th=[ 78], 95.00th=[ 87], 00:18:47.284 | 99.00th=[ 90], 99.50th=[ 90], 99.90th=[ 102], 99.95th=[ 146], 00:18:47.284 | 99.99th=[ 146] 00:18:47.284 bw ( KiB/s): min= 4096, max= 4832, per=9.54%, avg=4464.00, stdev=520.43, samples=2 00:18:47.284 iops : min= 1024, max= 1208, avg=1116.00, stdev=130.11, samples=2 00:18:47.285 lat (msec) : 2=0.04%, 20=7.28%, 50=31.22%, 100=58.42%, 250=3.04% 00:18:47.285 cpu : usr=1.19%, sys=2.28%, ctx=60, majf=0, minf=11 00:18:47.285 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:18:47.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.285 issued rwts: total=1024,1244,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.285 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.285 00:18:47.285 Run status group 0 (all jobs): 00:18:47.285 READ: bw=41.1MiB/s (43.1MB/s), 2022KiB/s-23.9MiB/s (2070kB/s-25.1MB/s), io=42.0MiB (44.0MB), run=1004-1023msec 00:18:47.285 WRITE: bw=45.7MiB/s (47.9MB/s), 3092KiB/s-25.2MiB/s (3166kB/s-26.4MB/s), io=46.8MiB (49.0MB), run=1004-1023msec 00:18:47.285 00:18:47.285 Disk stats (read/write): 00:18:47.285 nvme0n1: ios=5170/5632, merge=0/0, ticks=17264/16284, in_queue=33548, util=87.47% 00:18:47.285 nvme0n2: ios=2421/2560, merge=0/0, ticks=25533/50192, in_queue=75725, util=98.78% 00:18:47.285 nvme0n3: ios=533/575, merge=0/0, ticks=22337/17166, in_queue=39503, util=95.82% 00:18:47.285 nvme0n4: ios=789/1024, merge=0/0, ticks=24467/25830, in_queue=50297, util=95.90% 00:18:47.285 05:32:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:47.285 [global] 00:18:47.285 thread=1 00:18:47.285 invalidate=1 00:18:47.285 rw=randwrite 00:18:47.285 time_based=1 00:18:47.285 runtime=1 00:18:47.285 ioengine=libaio 00:18:47.285 direct=1 00:18:47.285 bs=4096 00:18:47.285 iodepth=128 00:18:47.285 norandommap=0 00:18:47.285 numjobs=1 00:18:47.285 00:18:47.285 verify_dump=1 00:18:47.285 verify_backlog=512 00:18:47.285 verify_state_save=0 00:18:47.285 do_verify=1 00:18:47.285 verify=crc32c-intel 00:18:47.285 [job0] 00:18:47.285 filename=/dev/nvme0n1 00:18:47.285 [job1] 00:18:47.285 filename=/dev/nvme0n2 00:18:47.285 [job2] 00:18:47.285 filename=/dev/nvme0n3 00:18:47.285 [job3] 00:18:47.285 filename=/dev/nvme0n4 00:18:47.285 Could not set queue depth (nvme0n1) 00:18:47.285 Could not set queue depth (nvme0n2) 00:18:47.285 Could not set queue depth (nvme0n3) 00:18:47.285 Could not set queue depth (nvme0n4) 00:18:47.542 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:47.542 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:47.542 job2: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:47.542 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:47.542 fio-3.35 00:18:47.542 Starting 4 threads 00:18:48.917 00:18:48.917 job0: (groupid=0, jobs=1): err= 0: pid=3238069: Sun Jul 14 05:32:55 2024 00:18:48.917 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:18:48.917 slat (usec): min=3, max=6014, avg=86.65, stdev=452.73 00:18:48.917 clat (usec): min=6316, max=18257, avg=11702.62, stdev=1465.39 00:18:48.917 lat (usec): min=6330, max=18271, avg=11789.26, stdev=1493.18 00:18:48.917 clat percentiles (usec): 00:18:48.917 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10683], 00:18:48.917 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11994], 00:18:48.917 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13566], 95.00th=[13960], 00:18:48.917 | 99.00th=[15795], 99.50th=[16712], 99.90th=[17957], 99.95th=[17957], 00:18:48.917 | 99.99th=[18220] 00:18:48.917 write: IOPS=4626, BW=18.1MiB/s (18.9MB/s)(18.1MiB/1003msec); 0 zone resets 00:18:48.917 slat (usec): min=4, max=60983, avg=118.24, stdev=1244.00 00:18:48.917 clat (usec): min=2229, max=74107, avg=15687.19, stdev=11925.21 00:18:48.917 lat (usec): min=3845, max=74116, avg=15805.44, stdev=11964.21 00:18:48.917 clat percentiles (usec): 00:18:48.917 | 1.00th=[ 7767], 5.00th=[ 9896], 10.00th=[11207], 20.00th=[12125], 00:18:48.917 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13435], 00:18:48.917 | 70.00th=[13566], 80.00th=[13698], 90.00th=[14877], 95.00th=[55837], 00:18:48.917 | 99.00th=[71828], 99.50th=[72877], 99.90th=[73925], 99.95th=[73925], 00:18:48.917 | 99.99th=[73925] 00:18:48.917 bw ( KiB/s): min=17896, max=18968, per=30.18%, avg=18432.00, stdev=758.02, samples=2 00:18:48.917 iops : min= 4474, max= 4742, avg=4608.00, stdev=189.50, samples=2 00:18:48.917 lat (msec) : 4=0.06%, 10=8.80%, 20=88.35%, 50=0.03%, 100=2.75% 00:18:48.917 cpu : usr=7.29%, sys=9.28%, ctx=445, majf=0, minf=13 00:18:48.917 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:48.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:48.917 issued rwts: total=4608,4640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.917 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:48.917 job1: (groupid=0, jobs=1): err= 0: pid=3238070: Sun Jul 14 05:32:55 2024 00:18:48.917 read: IOPS=3012, BW=11.8MiB/s (12.3MB/s)(11.9MiB/1008msec) 00:18:48.917 slat (usec): min=2, max=22728, avg=185.19, stdev=1318.73 00:18:48.917 clat (usec): min=2629, max=72755, avg=20888.27, stdev=14045.86 00:18:48.917 lat (usec): min=7402, max=72773, avg=21073.47, stdev=14177.49 00:18:48.917 clat percentiles (usec): 00:18:48.917 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10552], 00:18:48.917 | 30.00th=[11338], 40.00th=[12125], 50.00th=[13435], 60.00th=[14746], 00:18:48.917 | 70.00th=[26608], 80.00th=[35390], 90.00th=[44303], 95.00th=[47973], 00:18:48.917 | 99.00th=[59507], 99.50th=[61604], 99.90th=[72877], 99.95th=[72877], 00:18:48.917 | 99.99th=[72877] 00:18:48.917 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:18:48.917 slat (usec): min=3, max=15304, avg=133.92, stdev=768.62 00:18:48.917 clat (usec): min=6105, max=94953, avg=20867.84, stdev=16665.57 00:18:48.917 lat (usec): min=6112, max=94981, avg=21001.75, 
stdev=16765.81 00:18:48.917 clat percentiles (usec): 00:18:48.917 | 1.00th=[ 7242], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[10552], 00:18:48.917 | 30.00th=[11076], 40.00th=[11863], 50.00th=[13960], 60.00th=[19530], 00:18:48.917 | 70.00th=[21890], 80.00th=[24249], 90.00th=[36963], 95.00th=[60556], 00:18:48.917 | 99.00th=[86508], 99.50th=[89654], 99.90th=[94897], 99.95th=[94897], 00:18:48.917 | 99.99th=[94897] 00:18:48.917 bw ( KiB/s): min= 8192, max=16384, per=20.12%, avg=12288.00, stdev=5792.62, samples=2 00:18:48.917 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:18:48.917 lat (msec) : 4=0.02%, 10=13.70%, 20=50.45%, 50=29.92%, 100=5.91% 00:18:48.917 cpu : usr=3.87%, sys=5.96%, ctx=347, majf=0, minf=13 00:18:48.917 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:48.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:48.917 issued rwts: total=3037,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.917 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:48.917 job2: (groupid=0, jobs=1): err= 0: pid=3238071: Sun Jul 14 05:32:55 2024 00:18:48.917 read: IOPS=3179, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1004msec) 00:18:48.917 slat (usec): min=2, max=29670, avg=139.87, stdev=983.45 00:18:48.917 clat (usec): min=950, max=65395, avg=17302.58, stdev=7184.43 00:18:48.917 lat (usec): min=3919, max=65402, avg=17442.45, stdev=7234.11 00:18:48.917 clat percentiles (usec): 00:18:48.917 | 1.00th=[ 4293], 5.00th=[10552], 10.00th=[11600], 20.00th=[12256], 00:18:48.917 | 30.00th=[13566], 40.00th=[15270], 50.00th=[16319], 60.00th=[16909], 00:18:48.917 | 70.00th=[17957], 80.00th=[20055], 90.00th=[26346], 95.00th=[30016], 00:18:48.917 | 99.00th=[47973], 99.50th=[47973], 99.90th=[65274], 99.95th=[65274], 00:18:48.917 | 99.99th=[65274] 00:18:48.917 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:18:48.917 slat (usec): min=3, max=25104, avg=137.97, stdev=917.81 00:18:48.917 clat (usec): min=1744, max=65379, avg=20135.77, stdev=8834.76 00:18:48.917 lat (usec): min=1750, max=65386, avg=20273.74, stdev=8887.04 00:18:48.917 clat percentiles (usec): 00:18:48.917 | 1.00th=[ 4948], 5.00th=[ 9110], 10.00th=[12387], 20.00th=[13960], 00:18:48.917 | 30.00th=[15795], 40.00th=[17433], 50.00th=[17957], 60.00th=[19006], 00:18:48.917 | 70.00th=[20841], 80.00th=[24773], 90.00th=[34341], 95.00th=[36439], 00:18:48.917 | 99.00th=[49546], 99.50th=[49546], 99.90th=[53740], 99.95th=[61604], 00:18:48.917 | 99.99th=[65274] 00:18:48.917 bw ( KiB/s): min=14104, max=14504, per=23.42%, avg=14304.00, stdev=282.84, samples=2 00:18:48.917 iops : min= 3526, max= 3626, avg=3576.00, stdev=70.71, samples=2 00:18:48.917 lat (usec) : 1000=0.01% 00:18:48.917 lat (msec) : 2=0.13%, 4=0.16%, 10=5.00%, 20=67.44%, 50=26.87% 00:18:48.917 lat (msec) : 100=0.37% 00:18:48.917 cpu : usr=4.89%, sys=6.38%, ctx=319, majf=0, minf=9 00:18:48.917 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:48.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:48.917 issued rwts: total=3192,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.917 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:48.917 job3: (groupid=0, jobs=1): err= 0: pid=3238072: Sun Jul 14 05:32:55 2024 00:18:48.917 read: IOPS=3931, BW=15.4MiB/s 
(16.1MB/s)(15.5MiB/1008msec) 00:18:48.917 slat (usec): min=2, max=14411, avg=129.86, stdev=916.23 00:18:48.917 clat (usec): min=2727, max=33403, avg=16924.99, stdev=3932.34 00:18:48.917 lat (usec): min=7815, max=33494, avg=17054.85, stdev=4003.72 00:18:48.917 clat percentiles (usec): 00:18:48.917 | 1.00th=[ 8717], 5.00th=[11207], 10.00th=[11994], 20.00th=[13960], 00:18:48.917 | 30.00th=[14746], 40.00th=[15795], 50.00th=[16450], 60.00th=[17433], 00:18:48.917 | 70.00th=[18744], 80.00th=[19792], 90.00th=[21103], 95.00th=[24249], 00:18:48.917 | 99.00th=[30016], 99.50th=[31327], 99.90th=[33424], 99.95th=[33424], 00:18:48.917 | 99.99th=[33424] 00:18:48.917 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:18:48.917 slat (usec): min=3, max=13997, avg=108.72, stdev=691.12 00:18:48.917 clat (usec): min=847, max=40364, avg=14831.46, stdev=6180.47 00:18:48.917 lat (usec): min=853, max=40386, avg=14940.17, stdev=6211.89 00:18:48.917 clat percentiles (usec): 00:18:48.917 | 1.00th=[ 3425], 5.00th=[ 6652], 10.00th=[ 7898], 20.00th=[ 9765], 00:18:48.917 | 30.00th=[10552], 40.00th=[11863], 50.00th=[14091], 60.00th=[15533], 00:18:48.917 | 70.00th=[17957], 80.00th=[20579], 90.00th=[21890], 95.00th=[23725], 00:18:48.917 | 99.00th=[35914], 99.50th=[36963], 99.90th=[40109], 99.95th=[40109], 00:18:48.917 | 99.99th=[40109] 00:18:48.917 bw ( KiB/s): min=16384, max=16384, per=26.82%, avg=16384.00, stdev= 0.00, samples=2 00:18:48.917 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:18:48.917 lat (usec) : 1000=0.02% 00:18:48.917 lat (msec) : 2=0.06%, 4=0.68%, 10=11.52%, 20=67.63%, 50=20.09% 00:18:48.917 cpu : usr=5.96%, sys=8.24%, ctx=311, majf=0, minf=15 00:18:48.917 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:48.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:48.917 issued rwts: total=3963,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.917 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:48.917 00:18:48.917 Run status group 0 (all jobs): 00:18:48.918 READ: bw=57.4MiB/s (60.1MB/s), 11.8MiB/s-17.9MiB/s (12.3MB/s-18.8MB/s), io=57.8MiB (60.6MB), run=1003-1008msec 00:18:48.918 WRITE: bw=59.6MiB/s (62.5MB/s), 11.9MiB/s-18.1MiB/s (12.5MB/s-18.9MB/s), io=60.1MiB (63.0MB), run=1003-1008msec 00:18:48.918 00:18:48.918 Disk stats (read/write): 00:18:48.918 nvme0n1: ios=3633/4019, merge=0/0, ticks=21626/24284, in_queue=45910, util=89.68% 00:18:48.918 nvme0n2: ios=2610/3047, merge=0/0, ticks=22994/26762, in_queue=49756, util=91.46% 00:18:48.918 nvme0n3: ios=2807/3072, merge=0/0, ticks=24395/23663, in_queue=48058, util=90.71% 00:18:48.918 nvme0n4: ios=3089/3564, merge=0/0, ticks=46270/47846, in_queue=94116, util=94.11% 00:18:48.918 05:32:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:48.918 05:32:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3238206 00:18:48.918 05:32:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:48.918 05:32:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:48.918 [global] 00:18:48.918 thread=1 00:18:48.918 invalidate=1 00:18:48.918 rw=read 00:18:48.918 time_based=1 00:18:48.918 runtime=10 00:18:48.918 ioengine=libaio 00:18:48.918 direct=1 00:18:48.918 bs=4096 00:18:48.918 iodepth=1 00:18:48.918 norandommap=1 00:18:48.918 numjobs=1 
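[Editor's note, not part of the captured log] The 10-second read job launched just above (fio_pid=3238206) is the hotplug phase of fio.sh: while fio keeps reading from the exported namespaces (job sections printed below), the test deletes the backing bdevs over RPC, so these reads are expected to finish with Remote I/O errors and the script later prints "nvmf hotplug test: fio failed as expected". A minimal sketch of that flow, assuming the parameters echoed in this trace; the job name "hotplug" is arbitrary and ./scripts/rpc.py abbreviates the absolute workspace path:

    # background read workload against one of the connected namespaces
    fio --name=hotplug --ioengine=libaio --direct=1 --rw=read --bs=4096 \
        --iodepth=1 --numjobs=1 --time_based --runtime=10 --filename=/dev/nvme0n1 &
    fio_pid=$!
    # pull the backing devices out from under the initiator while I/O is in flight
    ./scripts/rpc.py bdev_raid_delete concat0
    ./scripts/rpc.py bdev_raid_delete raid0
    ./scripts/rpc.py bdev_malloc_delete Malloc0   # Malloc1 through Malloc6 likewise
    wait $fio_pid                                 # fio is expected to exit with I/O errors here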
00:18:48.918 00:18:48.918 [job0] 00:18:48.918 filename=/dev/nvme0n1 00:18:48.918 [job1] 00:18:48.918 filename=/dev/nvme0n2 00:18:48.918 [job2] 00:18:48.918 filename=/dev/nvme0n3 00:18:48.918 [job3] 00:18:48.918 filename=/dev/nvme0n4 00:18:48.918 Could not set queue depth (nvme0n1) 00:18:48.918 Could not set queue depth (nvme0n2) 00:18:48.918 Could not set queue depth (nvme0n3) 00:18:48.918 Could not set queue depth (nvme0n4) 00:18:48.918 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.918 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.918 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.918 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.918 fio-3.35 00:18:48.918 Starting 4 threads 00:18:52.193 05:32:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:52.193 05:32:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:52.193 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=6643712, buflen=4096 00:18:52.193 fio: pid=3238303, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:52.193 05:32:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:52.193 05:32:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:52.193 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=1048576, buflen=4096 00:18:52.193 fio: pid=3238298, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:52.450 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=28573696, buflen=4096 00:18:52.451 fio: pid=3238294, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:52.451 05:32:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:52.451 05:32:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:52.709 05:32:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:52.709 05:32:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:52.709 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=6983680, buflen=4096 00:18:52.709 fio: pid=3238295, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:52.709 00:18:52.709 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3238294: Sun Jul 14 05:32:59 2024 00:18:52.709 read: IOPS=2012, BW=8051KiB/s (8244kB/s)(27.2MiB/3466msec) 00:18:52.709 slat (usec): min=5, max=15454, avg=20.64, stdev=286.84 00:18:52.709 clat (usec): min=312, max=41708, avg=471.13, stdev=756.60 00:18:52.709 lat (usec): min=318, max=41723, avg=491.77, stdev=811.16 00:18:52.709 clat percentiles (usec): 00:18:52.709 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343], 
00:18:52.709 | 30.00th=[ 355], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 383], 00:18:52.709 | 70.00th=[ 408], 80.00th=[ 562], 90.00th=[ 725], 95.00th=[ 906], 00:18:52.709 | 99.00th=[ 1123], 99.50th=[ 1434], 99.90th=[ 1647], 99.95th=[ 1975], 00:18:52.709 | 99.99th=[41681] 00:18:52.709 bw ( KiB/s): min= 5728, max=11048, per=76.51%, avg=8604.00, stdev=1768.94, samples=6 00:18:52.709 iops : min= 1432, max= 2762, avg=2151.00, stdev=442.24, samples=6 00:18:52.709 lat (usec) : 500=77.24%, 750=14.18%, 1000=6.45% 00:18:52.709 lat (msec) : 2=2.08%, 20=0.01%, 50=0.03% 00:18:52.709 cpu : usr=1.90%, sys=4.39%, ctx=6983, majf=0, minf=1 00:18:52.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.709 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.709 issued rwts: total=6977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:52.709 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3238295: Sun Jul 14 05:32:59 2024 00:18:52.709 read: IOPS=454, BW=1816KiB/s (1859kB/s)(6820KiB/3756msec) 00:18:52.709 slat (usec): min=5, max=15644, avg=52.04, stdev=746.32 00:18:52.709 clat (usec): min=312, max=48770, avg=2133.88, stdev=7935.75 00:18:52.709 lat (usec): min=319, max=56062, avg=2185.94, stdev=8007.22 00:18:52.709 clat percentiles (usec): 00:18:52.709 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 347], 00:18:52.709 | 30.00th=[ 363], 40.00th=[ 379], 50.00th=[ 408], 60.00th=[ 482], 00:18:52.709 | 70.00th=[ 537], 80.00th=[ 865], 90.00th=[ 947], 95.00th=[ 1139], 00:18:52.709 | 99.00th=[41157], 99.50th=[41157], 99.90th=[46400], 99.95th=[49021], 00:18:52.709 | 99.99th=[49021] 00:18:52.709 bw ( KiB/s): min= 96, max= 5366, per=15.96%, avg=1795.14, stdev=1880.78, samples=7 00:18:52.709 iops : min= 24, max= 1341, avg=448.71, stdev=470.04, samples=7 00:18:52.709 lat (usec) : 500=63.77%, 750=13.54%, 1000=15.12% 00:18:52.709 lat (msec) : 2=3.40%, 4=0.06%, 10=0.06%, 50=3.99% 00:18:52.709 cpu : usr=0.43%, sys=0.80%, ctx=1711, majf=0, minf=1 00:18:52.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.709 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.709 issued rwts: total=1706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:52.709 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3238298: Sun Jul 14 05:32:59 2024 00:18:52.709 read: IOPS=79, BW=316KiB/s (324kB/s)(1024KiB/3237msec) 00:18:52.709 slat (nsec): min=9882, max=51479, avg=23298.33, stdev=10012.47 00:18:52.709 clat (usec): min=587, max=42000, avg=12504.32, stdev=18262.42 00:18:52.709 lat (usec): min=602, max=42021, avg=12527.64, stdev=18264.05 00:18:52.709 clat percentiles (usec): 00:18:52.709 | 1.00th=[ 594], 5.00th=[ 693], 10.00th=[ 709], 20.00th=[ 734], 00:18:52.709 | 30.00th=[ 889], 40.00th=[ 947], 50.00th=[ 971], 60.00th=[ 1004], 00:18:52.709 | 70.00th=[ 1188], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:52.709 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:18:52.709 | 99.99th=[42206] 00:18:52.709 bw ( KiB/s): min= 96, max= 736, per=2.29%, avg=258.67, stdev=269.72, samples=6 00:18:52.709 
iops : min= 24, max= 184, avg=64.67, stdev=67.43, samples=6 00:18:52.709 lat (usec) : 750=22.18%, 1000=37.74% 00:18:52.709 lat (msec) : 2=10.89%, 50=28.79% 00:18:52.709 cpu : usr=0.12%, sys=0.25%, ctx=258, majf=0, minf=1 00:18:52.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.709 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.709 issued rwts: total=257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:52.709 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3238303: Sun Jul 14 05:32:59 2024 00:18:52.709 read: IOPS=557, BW=2230KiB/s (2284kB/s)(6488KiB/2909msec) 00:18:52.709 slat (nsec): min=6619, max=70255, avg=13663.91, stdev=7046.88 00:18:52.709 clat (usec): min=387, max=43802, avg=1762.00, stdev=7083.53 00:18:52.709 lat (usec): min=404, max=43818, avg=1775.67, stdev=7085.55 00:18:52.709 clat percentiles (usec): 00:18:52.709 | 1.00th=[ 424], 5.00th=[ 441], 10.00th=[ 445], 20.00th=[ 453], 00:18:52.709 | 30.00th=[ 461], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 482], 00:18:52.709 | 70.00th=[ 490], 80.00th=[ 506], 90.00th=[ 537], 95.00th=[ 766], 00:18:52.709 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[43779], 00:18:52.709 | 99.99th=[43779] 00:18:52.709 bw ( KiB/s): min= 96, max= 4144, per=12.26%, avg=1379.20, stdev=1666.48, samples=5 00:18:52.709 iops : min= 24, max= 1036, avg=344.80, stdev=416.62, samples=5 00:18:52.709 lat (usec) : 500=77.08%, 750=17.50%, 1000=2.09% 00:18:52.709 lat (msec) : 2=0.12%, 50=3.14% 00:18:52.709 cpu : usr=0.65%, sys=1.03%, ctx=1623, majf=0, minf=1 00:18:52.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.709 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.709 issued rwts: total=1623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:52.709 00:18:52.709 Run status group 0 (all jobs): 00:18:52.709 READ: bw=11.0MiB/s (11.5MB/s), 316KiB/s-8051KiB/s (324kB/s-8244kB/s), io=41.2MiB (43.2MB), run=2909-3756msec 00:18:52.709 00:18:52.709 Disk stats (read/write): 00:18:52.709 nvme0n1: ios=6791/0, merge=0/0, ticks=3080/0, in_queue=3080, util=94.62% 00:18:52.709 nvme0n2: ios=1701/0, merge=0/0, ticks=3453/0, in_queue=3453, util=94.77% 00:18:52.709 nvme0n3: ios=253/0, merge=0/0, ticks=3079/0, in_queue=3079, util=96.76% 00:18:52.709 nvme0n4: ios=1549/0, merge=0/0, ticks=2782/0, in_queue=2782, util=96.74% 00:18:52.967 05:33:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:52.967 05:33:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:53.224 05:33:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:53.224 05:33:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:53.481 05:33:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:53.481 05:33:00 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:53.738 05:33:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:53.738 05:33:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:53.996 05:33:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:53.996 05:33:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3238206 00:18:53.996 05:33:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:53.996 05:33:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:54.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:54.254 05:33:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:54.254 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:18:54.254 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:54.254 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.254 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:54.254 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.254 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:18:54.254 05:33:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:54.254 05:33:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:54.254 nvmf hotplug test: fio failed as expected 00:18:54.254 05:33:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:54.513 rmmod nvme_tcp 00:18:54.513 rmmod nvme_fabrics 00:18:54.513 rmmod nvme_keyring 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@489 -- # '[' -n 3236280 ']' 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3236280 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3236280 ']' 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 3236280 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3236280 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3236280' 00:18:54.513 killing process with pid 3236280 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3236280 00:18:54.513 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3236280 00:18:54.772 05:33:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:54.772 05:33:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:54.772 05:33:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:54.772 05:33:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:54.772 05:33:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:54.772 05:33:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.772 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.772 05:33:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.303 05:33:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:57.303 00:18:57.303 real 0m23.568s 00:18:57.303 user 1m19.824s 00:18:57.303 sys 0m7.230s 00:18:57.303 05:33:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:57.303 05:33:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.303 ************************************ 00:18:57.303 END TEST nvmf_fio_target 00:18:57.303 ************************************ 00:18:57.303 05:33:03 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:57.303 05:33:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:57.303 05:33:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:57.303 05:33:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:57.303 ************************************ 00:18:57.303 START TEST nvmf_bdevio 00:18:57.303 ************************************ 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:57.303 * Looking for test storage... 
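The trace above is the tail of the fio hotplug check: target/fio.sh deletes the malloc bdevs backing the subsystem while fio is still connected, fio exits with a Remote I/O error as expected (fio_status=4, "nvmf hotplug test: fio failed as expected"), and the initiator and subsystem are then torn down. A minimal sketch of that sequence, assembled only from commands visible in the trace (SPDK_DIR is shorthand introduced here for the workspace spdk checkout, and the fio pid variable name is likewise illustrative):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # pull the malloc bdevs out from under the running fio job
    for malloc in Malloc3 Malloc4 Malloc5 Malloc6; do
        "$SPDK_DIR/scripts/rpc.py" bdev_malloc_delete "$malloc"
    done

    # fio is expected to fail once its namespaces disappear (status 4 in the trace)
    wait "$fio_pid" || fio_status=$?

    # disconnect the initiator, then drop the subsystem on the target
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1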
00:18:57.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:57.303 05:33:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:59.243 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:59.243 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:59.243 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:59.243 
Found net devices under 0000:0a:00.1: cvl_0_1 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:59.243 05:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:59.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:59.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:18:59.243 00:18:59.243 --- 10.0.0.2 ping statistics --- 00:18:59.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.243 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:59.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:59.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:18:59.243 00:18:59.243 --- 10.0.0.1 ping statistics --- 00:18:59.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.243 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3241035 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3241035 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3241035 ']' 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:59.243 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.243 [2024-07-14 05:33:06.132960] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:59.243 [2024-07-14 05:33:06.133034] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.243 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.243 [2024-07-14 05:33:06.198566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:59.243 [2024-07-14 05:33:06.291542] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.243 [2024-07-14 05:33:06.291586] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:59.243 [2024-07-14 05:33:06.291600] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.243 [2024-07-14 05:33:06.291611] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.243 [2024-07-14 05:33:06.291620] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.243 [2024-07-14 05:33:06.291706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:59.243 [2024-07-14 05:33:06.291767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:59.243 [2024-07-14 05:33:06.291833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:59.243 [2024-07-14 05:33:06.291831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.502 [2024-07-14 05:33:06.431440] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.502 Malloc0 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
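Before the listener notice that follows, bdevio.sh has already provisioned the target inside the cvl_0_0_ns_spdk namespace over the RPC socket. Condensed into one place, the target-side setup exercised above is the following sequence, a sketch pulled from the rpc_cmd lines in the trace (rpc_cmd is the test harness wrapper around scripts/rpc.py):

    # TCP transport, 64 MiB malloc bdev with 512-byte blocks, subsystem, namespace, listener
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420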
00:18:59.502 [2024-07-14 05:33:06.482299] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:59.502 { 00:18:59.502 "params": { 00:18:59.502 "name": "Nvme$subsystem", 00:18:59.502 "trtype": "$TEST_TRANSPORT", 00:18:59.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:59.502 "adrfam": "ipv4", 00:18:59.502 "trsvcid": "$NVMF_PORT", 00:18:59.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:59.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:59.502 "hdgst": ${hdgst:-false}, 00:18:59.502 "ddgst": ${ddgst:-false} 00:18:59.502 }, 00:18:59.502 "method": "bdev_nvme_attach_controller" 00:18:59.502 } 00:18:59.502 EOF 00:18:59.502 )") 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:59.502 05:33:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:59.502 "params": { 00:18:59.502 "name": "Nvme1", 00:18:59.502 "trtype": "tcp", 00:18:59.502 "traddr": "10.0.0.2", 00:18:59.502 "adrfam": "ipv4", 00:18:59.502 "trsvcid": "4420", 00:18:59.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:59.502 "hdgst": false, 00:18:59.502 "ddgst": false 00:18:59.502 }, 00:18:59.502 "method": "bdev_nvme_attach_controller" 00:18:59.502 }' 00:18:59.502 [2024-07-14 05:33:06.526885] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:18:59.502 [2024-07-14 05:33:06.526971] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3241180 ] 00:18:59.502 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.502 [2024-07-14 05:33:06.591316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:59.761 [2024-07-14 05:33:06.682136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.761 [2024-07-14 05:33:06.682188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.761 [2024-07-14 05:33:06.682191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.019 I/O targets: 00:19:00.019 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:00.019 00:19:00.019 00:19:00.019 CUnit - A unit testing framework for C - Version 2.1-3 00:19:00.019 http://cunit.sourceforge.net/ 00:19:00.019 00:19:00.019 00:19:00.019 Suite: bdevio tests on: Nvme1n1 00:19:00.019 Test: blockdev write read block ...passed 00:19:00.019 Test: blockdev write zeroes read block ...passed 00:19:00.019 Test: blockdev write zeroes read no split ...passed 00:19:00.019 Test: blockdev write zeroes read split ...passed 00:19:00.019 Test: blockdev write zeroes read split partial ...passed 00:19:00.019 Test: blockdev reset ...[2024-07-14 05:33:07.112773] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:00.019 [2024-07-14 05:33:07.112897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1559f80 (9): Bad file descriptor 00:19:00.276 [2024-07-14 05:33:07.174575] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:00.276 passed 00:19:00.276 Test: blockdev write read 8 blocks ...passed 00:19:00.276 Test: blockdev write read size > 128k ...passed 00:19:00.276 Test: blockdev write read invalid size ...passed 00:19:00.276 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:00.276 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:00.276 Test: blockdev write read max offset ...passed 00:19:00.276 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:00.535 Test: blockdev writev readv 8 blocks ...passed 00:19:00.535 Test: blockdev writev readv 30 x 1block ...passed 00:19:00.535 Test: blockdev writev readv block ...passed 00:19:00.535 Test: blockdev writev readv size > 128k ...passed 00:19:00.535 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:00.536 Test: blockdev comparev and writev ...[2024-07-14 05:33:07.472183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.536 [2024-07-14 05:33:07.472220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:00.536 [2024-07-14 05:33:07.472244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.536 [2024-07-14 05:33:07.472263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:00.536 [2024-07-14 05:33:07.472697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.536 [2024-07-14 05:33:07.472722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:00.536 [2024-07-14 05:33:07.472744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.536 [2024-07-14 05:33:07.472761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:00.536 [2024-07-14 05:33:07.473210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.536 [2024-07-14 05:33:07.473241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.536 [2024-07-14 05:33:07.473264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.536 [2024-07-14 05:33:07.473280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:00.536 [2024-07-14 05:33:07.473690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.536 [2024-07-14 05:33:07.473716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:00.536 [2024-07-14 05:33:07.473750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.536 [2024-07-14 05:33:07.473770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:00.536 passed 00:19:00.536 Test: blockdev nvme passthru rw ...passed 00:19:00.536 Test: blockdev nvme passthru vendor specific ...[2024-07-14 05:33:07.557272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:00.536 [2024-07-14 05:33:07.557312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:00.536 [2024-07-14 05:33:07.557515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:00.536 [2024-07-14 05:33:07.557539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:00.536 [2024-07-14 05:33:07.557756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:00.536 [2024-07-14 05:33:07.557779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:00.536 [2024-07-14 05:33:07.557992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:00.536 [2024-07-14 05:33:07.558015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:00.536 passed 00:19:00.536 Test: blockdev nvme admin passthru ...passed 00:19:00.536 Test: blockdev copy ...passed 00:19:00.536 00:19:00.536 Run Summary: Type Total Ran Passed Failed Inactive 00:19:00.536 suites 1 1 n/a 0 0 00:19:00.536 tests 23 23 23 0 0 00:19:00.536 asserts 152 152 152 0 n/a 00:19:00.536 00:19:00.536 Elapsed time = 1.441 seconds 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:00.794 rmmod nvme_tcp 00:19:00.794 rmmod nvme_fabrics 00:19:00.794 rmmod nvme_keyring 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3241035 ']' 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3241035 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
3241035 ']' 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3241035 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3241035 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3241035' 00:19:00.794 killing process with pid 3241035 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 3241035 00:19:00.794 05:33:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3241035 00:19:01.053 05:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:01.053 05:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:01.053 05:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:01.053 05:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:01.053 05:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:01.053 05:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.053 05:33:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:01.053 05:33:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.586 05:33:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:03.586 00:19:03.586 real 0m6.258s 00:19:03.586 user 0m10.380s 00:19:03.586 sys 0m2.011s 00:19:03.586 05:33:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:03.586 05:33:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:03.586 ************************************ 00:19:03.586 END TEST nvmf_bdevio 00:19:03.586 ************************************ 00:19:03.586 05:33:10 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:03.586 05:33:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:03.586 05:33:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:03.586 05:33:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:03.586 ************************************ 00:19:03.586 START TEST nvmf_auth_target 00:19:03.586 ************************************ 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:03.586 * Looking for test storage... 
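For reference, the initiator-side configuration that gen_nvmf_target_json fed to bdevio via /dev/fd/62 in the run above reduces to a single bdev_nvme_attach_controller entry. A cleaned-up rendering is sketched below; the parameter values are the ones printed in the trace, while the outer subsystems/bdev wrapper is assumed from the usual SPDK JSON-config layout rather than shown explicitly in the log:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }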
00:19:03.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:03.586 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.587 05:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.587 05:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.587 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:03.587 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:03.587 05:33:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:03.587 05:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.486 05:33:12 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:05.486 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:05.486 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:05.486 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:05.486 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:05.486 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:05.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:05.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:19:05.487 00:19:05.487 --- 10.0.0.2 ping statistics --- 00:19:05.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.487 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:05.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:05.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:19:05.487 00:19:05.487 --- 10.0.0.1 ping statistics --- 00:19:05.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.487 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3243755 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3243755 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3243755 ']' 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:05.487 05:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3243780 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=64dd837b90ada38db540f3c75494939f65bc83e02e769c5b 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1Yp 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 64dd837b90ada38db540f3c75494939f65bc83e02e769c5b 0 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 64dd837b90ada38db540f3c75494939f65bc83e02e769c5b 0 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=64dd837b90ada38db540f3c75494939f65bc83e02e769c5b 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1Yp 00:19:05.745 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1Yp 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.1Yp 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c323ef285fbd5ad307f2e6f7f200e51bb739203e81a93f76a65b006a8562200b 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.b51 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c323ef285fbd5ad307f2e6f7f200e51bb739203e81a93f76a65b006a8562200b 3 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c323ef285fbd5ad307f2e6f7f200e51bb739203e81a93f76a65b006a8562200b 3 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c323ef285fbd5ad307f2e6f7f200e51bb739203e81a93f76a65b006a8562200b 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:05.746 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.b51 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.b51 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.b51 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a591d9347d938149c8cff6b79054cad1 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.MAD 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a591d9347d938149c8cff6b79054cad1 1 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a591d9347d938149c8cff6b79054cad1 1 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=a591d9347d938149c8cff6b79054cad1 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.MAD 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.MAD 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.MAD 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=14ed3ab1cae80aa8036d33f1c77fc30ec0461e34fff59b58 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.KyQ 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 14ed3ab1cae80aa8036d33f1c77fc30ec0461e34fff59b58 2 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 14ed3ab1cae80aa8036d33f1c77fc30ec0461e34fff59b58 2 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=14ed3ab1cae80aa8036d33f1c77fc30ec0461e34fff59b58 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.KyQ 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.KyQ 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.KyQ 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=141cb44644a9635bca1d72c64f4f5aaa9799bf7c14fdd7df 00:19:06.005 
05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.kJz 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 141cb44644a9635bca1d72c64f4f5aaa9799bf7c14fdd7df 2 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 141cb44644a9635bca1d72c64f4f5aaa9799bf7c14fdd7df 2 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=141cb44644a9635bca1d72c64f4f5aaa9799bf7c14fdd7df 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:06.005 05:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.kJz 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.kJz 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.kJz 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=711aa89ddc347b5a9f108d199b77559e 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.MAb 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 711aa89ddc347b5a9f108d199b77559e 1 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 711aa89ddc347b5a9f108d199b77559e 1 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=711aa89ddc347b5a9f108d199b77559e 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.MAb 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.MAb 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.MAb 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9ed50da0d564bf39e3b87cdaa1f8a2c240f050aa5958284d9cb4d70d586801d5 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xax 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9ed50da0d564bf39e3b87cdaa1f8a2c240f050aa5958284d9cb4d70d586801d5 3 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9ed50da0d564bf39e3b87cdaa1f8a2c240f050aa5958284d9cb4d70d586801d5 3 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9ed50da0d564bf39e3b87cdaa1f8a2c240f050aa5958284d9cb4d70d586801d5 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:06.005 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xax 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xax 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.xax 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3243755 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3243755 ']' 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
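The gen_dhchap_key calls traced above all follow one pattern: read random bytes with xxd to get a hex key of the requested length, map the digest name to an id (null=0, sha256=1, sha384=2, sha512=3, per the digests table in the trace), write the formatted secret to a mktemp file and chmod it 0600. The python heredoc that does the formatting is not echoed in the trace, so the sketch below assumes the nvme-cli gen-dhchap-key layout, base64 of the key bytes followed by a 4-byte CRC32, which is consistent with the DHHC-1:NN:...: secrets that appear in the nvme connect commands later in this log.

  # Sketch only: reproduce one DHHC-1 secret outside the test scripts.
  # The CRC32 suffix and its little-endian byte order are assumptions; the actual python snippet is not shown in the trace.
  digest=0                                # 0=null, 1=sha256, 2=sha384, 3=sha512
  key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48 hex chars, as in "gen_dhchap_key null 48"
  file=$(mktemp -t spdk.key-null.XXX)
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+zlib.crc32(k).to_bytes(4,"little")).decode()))' "$key" "$digest" > "$file"
  chmod 0600 "$file"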
00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3243780 /var/tmp/host.sock 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3243780 ']' 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:06.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:06.264 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.828 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:06.828 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:06.828 05:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:06.828 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.828 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.828 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.828 05:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:06.828 05:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1Yp 00:19:06.828 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.828 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.828 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.828 05:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.1Yp 00:19:06.828 05:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1Yp 00:19:07.085 05:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.b51 ]] 00:19:07.085 05:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b51 00:19:07.086 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.086 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.086 05:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.086 05:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b51 00:19:07.086 05:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b51 00:19:07.343 05:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:07.343 05:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.MAD 00:19:07.343 05:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.343 05:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.343 05:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.343 05:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.MAD 00:19:07.343 05:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.MAD 00:19:07.601 05:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.KyQ ]] 00:19:07.601 05:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KyQ 00:19:07.601 05:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.601 05:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.601 05:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.601 05:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KyQ 00:19:07.601 05:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KyQ 00:19:07.859 05:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:07.859 05:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.kJz 00:19:07.859 05:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.859 05:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.859 05:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.859 05:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.kJz 00:19:07.859 05:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.kJz 00:19:08.117 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.MAb ]] 00:19:08.117 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MAb 00:19:08.117 05:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.117 05:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.117 05:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.117 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MAb 00:19:08.117 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.MAb 00:19:08.375 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:08.375 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.xax 00:19:08.375 05:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.375 05:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.375 05:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.375 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.xax 00:19:08.375 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.xax 00:19:08.632 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:08.632 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:08.632 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.632 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.632 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.632 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.890 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:08.890 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.890 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:08.890 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:08.890 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:08.890 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.890 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.890 05:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.890 05:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.890 05:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.890 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.891 05:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.149 00:19:09.149 05:33:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.149 05:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.149 05:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.407 05:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.407 05:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.407 05:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.407 05:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.407 05:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.407 05:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.407 { 00:19:09.407 "cntlid": 1, 00:19:09.407 "qid": 0, 00:19:09.407 "state": "enabled", 00:19:09.407 "listen_address": { 00:19:09.407 "trtype": "TCP", 00:19:09.407 "adrfam": "IPv4", 00:19:09.407 "traddr": "10.0.0.2", 00:19:09.407 "trsvcid": "4420" 00:19:09.407 }, 00:19:09.407 "peer_address": { 00:19:09.407 "trtype": "TCP", 00:19:09.407 "adrfam": "IPv4", 00:19:09.407 "traddr": "10.0.0.1", 00:19:09.407 "trsvcid": "44542" 00:19:09.407 }, 00:19:09.407 "auth": { 00:19:09.407 "state": "completed", 00:19:09.407 "digest": "sha256", 00:19:09.407 "dhgroup": "null" 00:19:09.407 } 00:19:09.407 } 00:19:09.407 ]' 00:19:09.407 05:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.407 05:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.407 05:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.407 05:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:09.407 05:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.407 05:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.407 05:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.407 05:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.664 05:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:19:10.598 05:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.598 05:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.598 05:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.598 05:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:10.598 05:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.598 05:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.598 05:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:10.598 05:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:10.856 05:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:10.856 05:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.856 05:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:10.856 05:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:10.856 05:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:10.856 05:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.856 05:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.856 05:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.856 05:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.856 05:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.856 05:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.856 05:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.422 00:19:11.422 05:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.422 05:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.422 05:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.422 05:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.422 05:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.422 05:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.422 05:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.422 05:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.422 05:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.422 { 00:19:11.422 "cntlid": 3, 00:19:11.422 "qid": 0, 00:19:11.422 "state": "enabled", 00:19:11.422 "listen_address": { 00:19:11.422 
"trtype": "TCP", 00:19:11.422 "adrfam": "IPv4", 00:19:11.422 "traddr": "10.0.0.2", 00:19:11.422 "trsvcid": "4420" 00:19:11.422 }, 00:19:11.422 "peer_address": { 00:19:11.422 "trtype": "TCP", 00:19:11.422 "adrfam": "IPv4", 00:19:11.422 "traddr": "10.0.0.1", 00:19:11.422 "trsvcid": "50852" 00:19:11.422 }, 00:19:11.422 "auth": { 00:19:11.422 "state": "completed", 00:19:11.422 "digest": "sha256", 00:19:11.422 "dhgroup": "null" 00:19:11.422 } 00:19:11.422 } 00:19:11.422 ]' 00:19:11.422 05:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.680 05:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.680 05:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.680 05:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:11.680 05:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.680 05:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.680 05:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.680 05:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.938 05:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 00:19:12.899 05:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.899 05:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.899 05:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.899 05:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.899 05:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.900 05:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.900 05:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:12.900 05:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:13.159 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:13.159 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.159 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.159 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:13.159 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:13.159 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.159 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.159 05:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.159 05:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.159 05:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.159 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.159 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.416 00:19:13.416 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.416 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.416 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.674 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.674 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.674 05:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.674 05:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.674 05:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.674 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.674 { 00:19:13.674 "cntlid": 5, 00:19:13.674 "qid": 0, 00:19:13.674 "state": "enabled", 00:19:13.674 "listen_address": { 00:19:13.674 "trtype": "TCP", 00:19:13.674 "adrfam": "IPv4", 00:19:13.674 "traddr": "10.0.0.2", 00:19:13.674 "trsvcid": "4420" 00:19:13.674 }, 00:19:13.675 "peer_address": { 00:19:13.675 "trtype": "TCP", 00:19:13.675 "adrfam": "IPv4", 00:19:13.675 "traddr": "10.0.0.1", 00:19:13.675 "trsvcid": "50864" 00:19:13.675 }, 00:19:13.675 "auth": { 00:19:13.675 "state": "completed", 00:19:13.675 "digest": "sha256", 00:19:13.675 "dhgroup": "null" 00:19:13.675 } 00:19:13.675 } 00:19:13.675 ]' 00:19:13.675 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.932 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.932 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.932 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:13.932 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.932 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.932 05:33:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.932 05:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.190 05:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:19:15.122 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.122 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.122 05:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.122 05:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.122 05:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.122 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.122 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:15.122 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:15.379 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:15.379 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.379 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:15.379 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:15.379 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:15.379 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.379 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:15.379 05:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.379 05:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.379 05:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.379 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.380 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.637 00:19:15.637 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.637 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.637 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.895 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.895 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.895 05:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.895 05:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.895 05:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.895 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.895 { 00:19:15.895 "cntlid": 7, 00:19:15.895 "qid": 0, 00:19:15.895 "state": "enabled", 00:19:15.895 "listen_address": { 00:19:15.895 "trtype": "TCP", 00:19:15.895 "adrfam": "IPv4", 00:19:15.895 "traddr": "10.0.0.2", 00:19:15.895 "trsvcid": "4420" 00:19:15.895 }, 00:19:15.895 "peer_address": { 00:19:15.895 "trtype": "TCP", 00:19:15.895 "adrfam": "IPv4", 00:19:15.895 "traddr": "10.0.0.1", 00:19:15.895 "trsvcid": "50892" 00:19:15.895 }, 00:19:15.895 "auth": { 00:19:15.895 "state": "completed", 00:19:15.895 "digest": "sha256", 00:19:15.895 "dhgroup": "null" 00:19:15.895 } 00:19:15.895 } 00:19:15.895 ]' 00:19:15.895 05:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.167 05:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.167 05:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.167 05:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:16.167 05:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.167 05:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.167 05:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.167 05:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.424 05:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:19:17.357 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.357 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.357 05:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.357 
05:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.357 05:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.357 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.357 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.357 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.357 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.614 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:17.614 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.614 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.614 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:17.614 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:17.614 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.614 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.614 05:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.614 05:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.615 05:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.615 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.615 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.872 00:19:17.872 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.872 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.872 05:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.130 05:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.130 05:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.130 05:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.130 05:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.130 05:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.130 05:33:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.130 { 00:19:18.130 "cntlid": 9, 00:19:18.130 "qid": 0, 00:19:18.130 "state": "enabled", 00:19:18.130 "listen_address": { 00:19:18.130 "trtype": "TCP", 00:19:18.130 "adrfam": "IPv4", 00:19:18.130 "traddr": "10.0.0.2", 00:19:18.130 "trsvcid": "4420" 00:19:18.130 }, 00:19:18.130 "peer_address": { 00:19:18.130 "trtype": "TCP", 00:19:18.130 "adrfam": "IPv4", 00:19:18.130 "traddr": "10.0.0.1", 00:19:18.130 "trsvcid": "50916" 00:19:18.130 }, 00:19:18.130 "auth": { 00:19:18.130 "state": "completed", 00:19:18.130 "digest": "sha256", 00:19:18.130 "dhgroup": "ffdhe2048" 00:19:18.130 } 00:19:18.130 } 00:19:18.130 ]' 00:19:18.130 05:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.130 05:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.130 05:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.388 05:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:18.388 05:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.388 05:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.388 05:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.388 05:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.645 05:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:19:19.579 05:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.579 05:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.579 05:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.579 05:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.579 05:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.579 05:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.579 05:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:19.579 05:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:19.838 05:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:19.838 05:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.838 05:33:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.838 05:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:19.838 05:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:19.838 05:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.838 05:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.838 05:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.838 05:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.838 05:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.838 05:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.838 05:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.096 00:19:20.096 05:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.096 05:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.096 05:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.355 05:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.355 05:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.355 05:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.355 05:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.355 05:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.355 05:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.355 { 00:19:20.355 "cntlid": 11, 00:19:20.355 "qid": 0, 00:19:20.355 "state": "enabled", 00:19:20.355 "listen_address": { 00:19:20.355 "trtype": "TCP", 00:19:20.355 "adrfam": "IPv4", 00:19:20.355 "traddr": "10.0.0.2", 00:19:20.355 "trsvcid": "4420" 00:19:20.355 }, 00:19:20.355 "peer_address": { 00:19:20.355 "trtype": "TCP", 00:19:20.355 "adrfam": "IPv4", 00:19:20.355 "traddr": "10.0.0.1", 00:19:20.355 "trsvcid": "50944" 00:19:20.355 }, 00:19:20.355 "auth": { 00:19:20.355 "state": "completed", 00:19:20.355 "digest": "sha256", 00:19:20.355 "dhgroup": "ffdhe2048" 00:19:20.355 } 00:19:20.355 } 00:19:20.355 ]' 00:19:20.355 05:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.355 05:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.355 05:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.355 05:33:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:20.355 05:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.614 05:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.615 05:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.615 05:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.873 05:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 00:19:21.809 05:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.809 05:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.809 05:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.809 05:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.809 05:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.809 05:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.809 05:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:21.809 05:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:22.067 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:22.067 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.067 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.067 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:22.067 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:22.067 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.067 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.067 05:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.067 05:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.067 05:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.067 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.067 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.325 00:19:22.325 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.325 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.325 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.583 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.583 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.583 05:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.583 05:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.583 05:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.583 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.583 { 00:19:22.583 "cntlid": 13, 00:19:22.583 "qid": 0, 00:19:22.583 "state": "enabled", 00:19:22.583 "listen_address": { 00:19:22.583 "trtype": "TCP", 00:19:22.583 "adrfam": "IPv4", 00:19:22.583 "traddr": "10.0.0.2", 00:19:22.583 "trsvcid": "4420" 00:19:22.583 }, 00:19:22.583 "peer_address": { 00:19:22.583 "trtype": "TCP", 00:19:22.583 "adrfam": "IPv4", 00:19:22.583 "traddr": "10.0.0.1", 00:19:22.583 "trsvcid": "43466" 00:19:22.583 }, 00:19:22.583 "auth": { 00:19:22.583 "state": "completed", 00:19:22.583 "digest": "sha256", 00:19:22.583 "dhgroup": "ffdhe2048" 00:19:22.583 } 00:19:22.583 } 00:19:22.583 ]' 00:19:22.583 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.583 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.583 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.583 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:22.583 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.841 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.841 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.841 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.099 05:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:19:24.034 05:33:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.034 05:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.034 05:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.034 05:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.034 05:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.034 05:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.034 05:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:24.034 05:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:24.292 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:24.292 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.292 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.292 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:24.292 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:24.292 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.292 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:24.292 05:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.292 05:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.292 05:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.292 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.292 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.549 00:19:24.549 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.549 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.550 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.807 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.807 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
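In the key3 pass above, the ckeys array has no controller key for index 3, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion is empty: nvmf_subsystem_add_host is invoked with --dhchap-key key3 alone, and the matching nvme connect later supplies only --dhchap-secret. In effect key0 through key2 appear to exercise bidirectional authentication while key3 covers the host-key-only case, for example:

# key0..key2: host key plus controller key (bidirectional)
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# key3: host key only, as seen in this run
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3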
00:19:24.807 05:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.807 05:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.807 05:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.807 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.807 { 00:19:24.807 "cntlid": 15, 00:19:24.807 "qid": 0, 00:19:24.807 "state": "enabled", 00:19:24.807 "listen_address": { 00:19:24.807 "trtype": "TCP", 00:19:24.807 "adrfam": "IPv4", 00:19:24.807 "traddr": "10.0.0.2", 00:19:24.807 "trsvcid": "4420" 00:19:24.807 }, 00:19:24.807 "peer_address": { 00:19:24.807 "trtype": "TCP", 00:19:24.807 "adrfam": "IPv4", 00:19:24.807 "traddr": "10.0.0.1", 00:19:24.807 "trsvcid": "43492" 00:19:24.807 }, 00:19:24.807 "auth": { 00:19:24.807 "state": "completed", 00:19:24.807 "digest": "sha256", 00:19:24.807 "dhgroup": "ffdhe2048" 00:19:24.807 } 00:19:24.807 } 00:19:24.807 ]' 00:19:24.807 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.807 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.807 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.807 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:24.807 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.065 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.065 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.065 05:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.323 05:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.259 05:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.537 05:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.537 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.537 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.793 00:19:26.793 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.793 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.793 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.049 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.049 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.049 05:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.049 05:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.049 05:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.049 05:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.049 { 00:19:27.049 "cntlid": 17, 00:19:27.049 "qid": 0, 00:19:27.049 "state": "enabled", 00:19:27.049 "listen_address": { 00:19:27.049 "trtype": "TCP", 00:19:27.049 "adrfam": "IPv4", 00:19:27.049 "traddr": "10.0.0.2", 00:19:27.049 "trsvcid": "4420" 00:19:27.049 }, 00:19:27.049 "peer_address": { 00:19:27.049 "trtype": "TCP", 00:19:27.049 "adrfam": "IPv4", 00:19:27.049 "traddr": "10.0.0.1", 00:19:27.049 "trsvcid": "43512" 00:19:27.049 }, 00:19:27.049 "auth": { 00:19:27.049 "state": "completed", 00:19:27.049 "digest": "sha256", 00:19:27.049 "dhgroup": "ffdhe3072" 00:19:27.049 } 00:19:27.049 } 00:19:27.049 ]' 00:19:27.049 05:33:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.049 05:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.049 05:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.049 05:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:27.049 05:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.049 05:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.049 05:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.049 05:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.306 05:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:19:28.239 05:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.239 05:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.239 05:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.239 05:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.239 05:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.239 05:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.239 05:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:28.239 05:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:28.498 05:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:28.498 05:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.498 05:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.498 05:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:28.498 05:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:28.498 05:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.498 05:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.498 05:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.498 
05:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.498 05:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.498 05:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.498 05:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.064 00:19:29.064 05:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.064 05:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.064 05:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.323 05:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.323 05:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.323 05:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.323 05:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.323 05:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.323 05:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.323 { 00:19:29.323 "cntlid": 19, 00:19:29.323 "qid": 0, 00:19:29.323 "state": "enabled", 00:19:29.323 "listen_address": { 00:19:29.323 "trtype": "TCP", 00:19:29.323 "adrfam": "IPv4", 00:19:29.323 "traddr": "10.0.0.2", 00:19:29.323 "trsvcid": "4420" 00:19:29.323 }, 00:19:29.323 "peer_address": { 00:19:29.323 "trtype": "TCP", 00:19:29.323 "adrfam": "IPv4", 00:19:29.323 "traddr": "10.0.0.1", 00:19:29.323 "trsvcid": "43546" 00:19:29.323 }, 00:19:29.323 "auth": { 00:19:29.323 "state": "completed", 00:19:29.323 "digest": "sha256", 00:19:29.323 "dhgroup": "ffdhe3072" 00:19:29.323 } 00:19:29.323 } 00:19:29.323 ]' 00:19:29.323 05:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.323 05:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.323 05:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.323 05:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:29.323 05:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.323 05:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.323 05:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.323 05:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.581 05:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 00:19:30.513 05:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.513 05:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.513 05:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.513 05:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.513 05:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.513 05:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.513 05:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:30.513 05:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:30.771 05:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:30.771 05:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.771 05:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:30.771 05:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:30.771 05:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:30.771 05:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.771 05:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.771 05:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.771 05:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.771 05:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.771 05:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.771 05:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.029 00:19:31.029 05:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.029 05:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
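Besides the SPDK host (bdev_nvme) attach, each pass also exercises the kernel initiator: nvme-cli connects to the same subsystem with the corresponding DH-HMAC-CHAP secrets passed on the command line, then disconnects before the host entry is removed again. A sketch with the secrets abbreviated (the full DHHC-1:... strings appear verbatim in the log above):

# kernel-initiator leg for the same key: host secret and, for key0..key2, the controller secret
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
# tear the session down again before the next key/dhgroup combination
nvme disconnect -n nqn.2024-03.io.spdk:cnode0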
00:19:31.029 05:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.285 05:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.285 05:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.285 05:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.285 05:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.285 05:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.285 05:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.285 { 00:19:31.285 "cntlid": 21, 00:19:31.285 "qid": 0, 00:19:31.285 "state": "enabled", 00:19:31.285 "listen_address": { 00:19:31.285 "trtype": "TCP", 00:19:31.285 "adrfam": "IPv4", 00:19:31.285 "traddr": "10.0.0.2", 00:19:31.285 "trsvcid": "4420" 00:19:31.285 }, 00:19:31.285 "peer_address": { 00:19:31.285 "trtype": "TCP", 00:19:31.285 "adrfam": "IPv4", 00:19:31.285 "traddr": "10.0.0.1", 00:19:31.285 "trsvcid": "42894" 00:19:31.285 }, 00:19:31.285 "auth": { 00:19:31.285 "state": "completed", 00:19:31.285 "digest": "sha256", 00:19:31.285 "dhgroup": "ffdhe3072" 00:19:31.285 } 00:19:31.285 } 00:19:31.285 ]' 00:19:31.285 05:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.542 05:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.542 05:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.542 05:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:31.542 05:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.542 05:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.542 05:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.542 05:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.800 05:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:19:32.732 05:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.732 05:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.732 05:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.732 05:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.732 05:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.732 05:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:19:32.732 05:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:32.732 05:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:32.990 05:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:32.990 05:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.990 05:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:32.990 05:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:32.990 05:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:32.990 05:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.990 05:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:32.990 05:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.990 05:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.990 05:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.990 05:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:32.990 05:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.249 00:19:33.249 05:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.249 05:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.249 05:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.507 05:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.507 05:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.507 05:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.507 05:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.507 05:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.507 05:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.507 { 00:19:33.507 "cntlid": 23, 00:19:33.507 "qid": 0, 00:19:33.507 "state": "enabled", 00:19:33.507 "listen_address": { 00:19:33.507 "trtype": "TCP", 00:19:33.507 "adrfam": "IPv4", 00:19:33.507 "traddr": "10.0.0.2", 00:19:33.507 "trsvcid": "4420" 00:19:33.507 }, 00:19:33.507 "peer_address": { 00:19:33.507 "trtype": "TCP", 00:19:33.507 "adrfam": "IPv4", 
00:19:33.507 "traddr": "10.0.0.1", 00:19:33.507 "trsvcid": "42914" 00:19:33.507 }, 00:19:33.507 "auth": { 00:19:33.507 "state": "completed", 00:19:33.507 "digest": "sha256", 00:19:33.507 "dhgroup": "ffdhe3072" 00:19:33.507 } 00:19:33.507 } 00:19:33.507 ]' 00:19:33.507 05:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.765 05:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.765 05:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.765 05:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:33.765 05:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.765 05:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.765 05:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.765 05:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.023 05:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:19:34.956 05:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.956 05:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.956 05:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.956 05:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.956 05:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.956 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.956 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.956 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:34.956 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:35.214 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:35.214 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.214 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.214 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:35.214 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:35.214 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.214 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.214 05:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.214 05:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.214 05:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.214 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.214 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.780 00:19:35.780 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.780 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.780 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.038 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.038 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.038 05:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.038 05:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.038 05:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.038 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.038 { 00:19:36.038 "cntlid": 25, 00:19:36.038 "qid": 0, 00:19:36.038 "state": "enabled", 00:19:36.038 "listen_address": { 00:19:36.038 "trtype": "TCP", 00:19:36.038 "adrfam": "IPv4", 00:19:36.038 "traddr": "10.0.0.2", 00:19:36.038 "trsvcid": "4420" 00:19:36.038 }, 00:19:36.038 "peer_address": { 00:19:36.038 "trtype": "TCP", 00:19:36.038 "adrfam": "IPv4", 00:19:36.038 "traddr": "10.0.0.1", 00:19:36.038 "trsvcid": "42924" 00:19:36.038 }, 00:19:36.038 "auth": { 00:19:36.038 "state": "completed", 00:19:36.038 "digest": "sha256", 00:19:36.038 "dhgroup": "ffdhe4096" 00:19:36.038 } 00:19:36.038 } 00:19:36.038 ]' 00:19:36.038 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.038 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.038 05:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.038 05:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:36.038 05:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.038 05:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.038 05:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.038 05:33:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.308 05:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:19:37.247 05:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.247 05:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.247 05:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.247 05:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.248 05:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.248 05:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.248 05:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:37.248 05:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:37.505 05:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:37.505 05:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.505 05:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.505 05:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:37.505 05:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:37.505 05:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.505 05:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.505 05:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.505 05:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.505 05:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.505 05:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.505 05:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.070 00:19:38.070 05:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.070 05:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.070 05:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.070 05:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.070 05:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.070 05:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.070 05:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.328 05:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.328 05:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.328 { 00:19:38.328 "cntlid": 27, 00:19:38.328 "qid": 0, 00:19:38.328 "state": "enabled", 00:19:38.328 "listen_address": { 00:19:38.328 "trtype": "TCP", 00:19:38.328 "adrfam": "IPv4", 00:19:38.328 "traddr": "10.0.0.2", 00:19:38.328 "trsvcid": "4420" 00:19:38.328 }, 00:19:38.328 "peer_address": { 00:19:38.328 "trtype": "TCP", 00:19:38.328 "adrfam": "IPv4", 00:19:38.328 "traddr": "10.0.0.1", 00:19:38.328 "trsvcid": "42946" 00:19:38.328 }, 00:19:38.328 "auth": { 00:19:38.328 "state": "completed", 00:19:38.328 "digest": "sha256", 00:19:38.328 "dhgroup": "ffdhe4096" 00:19:38.328 } 00:19:38.328 } 00:19:38.328 ]' 00:19:38.328 05:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.328 05:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.328 05:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.328 05:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:38.328 05:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.328 05:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.328 05:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.328 05:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.585 05:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 00:19:39.517 05:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.517 05:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
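After each attach, the authentication outcome is read back from the target: nvmf_subsystem_get_qpairs returns the admin queue pair (qid 0) with an auth block, and the script asserts that the negotiated digest and DH group match what was configured and that the state is completed. Distilled from the checks above (subsystem NQN and expected values taken from this run):

# fetch the qpairs for the subsystem and verify the negotiated auth parameters
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]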
00:19:39.517 05:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.517 05:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.517 05:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.517 05:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.517 05:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.517 05:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.775 05:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:39.775 05:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.775 05:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.775 05:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:39.775 05:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:39.775 05:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.775 05:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.775 05:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.775 05:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.775 05:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.775 05:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.775 05:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.394 00:19:40.394 05:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.394 05:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.394 05:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.394 05:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.394 05:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.394 05:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.394 05:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.394 05:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.394 
05:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.394 { 00:19:40.394 "cntlid": 29, 00:19:40.394 "qid": 0, 00:19:40.394 "state": "enabled", 00:19:40.394 "listen_address": { 00:19:40.394 "trtype": "TCP", 00:19:40.394 "adrfam": "IPv4", 00:19:40.394 "traddr": "10.0.0.2", 00:19:40.394 "trsvcid": "4420" 00:19:40.394 }, 00:19:40.394 "peer_address": { 00:19:40.394 "trtype": "TCP", 00:19:40.394 "adrfam": "IPv4", 00:19:40.394 "traddr": "10.0.0.1", 00:19:40.394 "trsvcid": "42964" 00:19:40.394 }, 00:19:40.394 "auth": { 00:19:40.394 "state": "completed", 00:19:40.394 "digest": "sha256", 00:19:40.394 "dhgroup": "ffdhe4096" 00:19:40.394 } 00:19:40.394 } 00:19:40.394 ]' 00:19:40.394 05:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.652 05:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.652 05:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.652 05:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:40.652 05:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.652 05:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.652 05:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.652 05:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.909 05:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:19:41.842 05:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.842 05:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.842 05:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.842 05:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.842 05:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.842 05:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.842 05:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:41.842 05:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:42.099 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:42.099 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.099 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:19:42.099 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:42.099 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:42.099 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.099 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:42.099 05:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.099 05:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.099 05:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.099 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.099 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.357 00:19:42.357 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.357 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.357 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.615 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.615 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.615 05:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.615 05:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.615 05:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.615 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.615 { 00:19:42.615 "cntlid": 31, 00:19:42.615 "qid": 0, 00:19:42.615 "state": "enabled", 00:19:42.615 "listen_address": { 00:19:42.615 "trtype": "TCP", 00:19:42.615 "adrfam": "IPv4", 00:19:42.615 "traddr": "10.0.0.2", 00:19:42.615 "trsvcid": "4420" 00:19:42.615 }, 00:19:42.615 "peer_address": { 00:19:42.615 "trtype": "TCP", 00:19:42.615 "adrfam": "IPv4", 00:19:42.615 "traddr": "10.0.0.1", 00:19:42.615 "trsvcid": "33158" 00:19:42.615 }, 00:19:42.615 "auth": { 00:19:42.615 "state": "completed", 00:19:42.615 "digest": "sha256", 00:19:42.615 "dhgroup": "ffdhe4096" 00:19:42.615 } 00:19:42.615 } 00:19:42.615 ]' 00:19:42.615 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.615 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.615 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.873 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:42.873 05:33:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.873 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.873 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.873 05:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.131 05:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:19:44.064 05:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.064 05:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.064 05:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.064 05:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.064 05:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.064 05:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.064 05:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.064 05:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.064 05:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.323 05:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:44.323 05:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.323 05:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.323 05:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:44.323 05:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:44.323 05:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.323 05:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.323 05:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.323 05:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.323 05:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.323 05:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:19:44.323 05:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.889 00:19:44.889 05:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.889 05:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.889 05:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.147 05:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.147 05:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.147 05:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.147 05:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.147 05:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.147 05:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.147 { 00:19:45.147 "cntlid": 33, 00:19:45.147 "qid": 0, 00:19:45.147 "state": "enabled", 00:19:45.147 "listen_address": { 00:19:45.147 "trtype": "TCP", 00:19:45.147 "adrfam": "IPv4", 00:19:45.147 "traddr": "10.0.0.2", 00:19:45.147 "trsvcid": "4420" 00:19:45.147 }, 00:19:45.147 "peer_address": { 00:19:45.147 "trtype": "TCP", 00:19:45.147 "adrfam": "IPv4", 00:19:45.147 "traddr": "10.0.0.1", 00:19:45.147 "trsvcid": "33202" 00:19:45.147 }, 00:19:45.147 "auth": { 00:19:45.147 "state": "completed", 00:19:45.147 "digest": "sha256", 00:19:45.147 "dhgroup": "ffdhe6144" 00:19:45.147 } 00:19:45.147 } 00:19:45.147 ]' 00:19:45.147 05:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.147 05:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.147 05:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.147 05:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:45.147 05:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.147 05:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.147 05:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.147 05:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.405 05:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:19:46.338 05:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:46.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.338 05:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.338 05:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.338 05:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.338 05:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.338 05:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.338 05:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:46.338 05:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:46.595 05:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:46.595 05:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.595 05:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:46.595 05:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:46.595 05:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:46.595 05:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.595 05:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.595 05:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.595 05:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.595 05:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.595 05:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.595 05:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.159 00:19:47.159 05:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.159 05:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.159 05:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.417 05:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.417 05:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:19:47.417 05:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.417 05:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.417 05:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.417 05:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.417 { 00:19:47.417 "cntlid": 35, 00:19:47.417 "qid": 0, 00:19:47.417 "state": "enabled", 00:19:47.417 "listen_address": { 00:19:47.417 "trtype": "TCP", 00:19:47.417 "adrfam": "IPv4", 00:19:47.417 "traddr": "10.0.0.2", 00:19:47.417 "trsvcid": "4420" 00:19:47.417 }, 00:19:47.417 "peer_address": { 00:19:47.417 "trtype": "TCP", 00:19:47.417 "adrfam": "IPv4", 00:19:47.417 "traddr": "10.0.0.1", 00:19:47.417 "trsvcid": "33220" 00:19:47.417 }, 00:19:47.417 "auth": { 00:19:47.417 "state": "completed", 00:19:47.417 "digest": "sha256", 00:19:47.417 "dhgroup": "ffdhe6144" 00:19:47.417 } 00:19:47.417 } 00:19:47.417 ]' 00:19:47.417 05:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.675 05:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.675 05:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.675 05:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:47.675 05:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.675 05:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.675 05:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.675 05:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.933 05:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 00:19:48.864 05:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.864 05:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.864 05:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.864 05:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.864 05:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.864 05:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.864 05:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.865 05:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:19:49.122 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:49.122 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.122 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:49.122 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:49.122 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:49.122 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.122 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.122 05:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.122 05:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.122 05:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.122 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.122 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.689 00:19:49.689 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.689 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.689 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.947 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.947 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.947 05:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.947 05:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.947 05:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.947 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.947 { 00:19:49.947 "cntlid": 37, 00:19:49.947 "qid": 0, 00:19:49.947 "state": "enabled", 00:19:49.947 "listen_address": { 00:19:49.947 "trtype": "TCP", 00:19:49.947 "adrfam": "IPv4", 00:19:49.947 "traddr": "10.0.0.2", 00:19:49.947 "trsvcid": "4420" 00:19:49.947 }, 00:19:49.947 "peer_address": { 00:19:49.947 "trtype": "TCP", 00:19:49.947 "adrfam": "IPv4", 00:19:49.947 "traddr": "10.0.0.1", 00:19:49.947 "trsvcid": "33242" 00:19:49.947 }, 00:19:49.947 "auth": { 00:19:49.947 "state": "completed", 00:19:49.947 "digest": "sha256", 00:19:49.947 "dhgroup": "ffdhe6144" 00:19:49.947 } 00:19:49.947 } 00:19:49.947 ]' 00:19:49.947 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:49.947 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.947 05:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.947 05:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.947 05:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.947 05:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.947 05:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.947 05:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.511 05:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:19:51.444 05:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.444 05:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.444 05:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.444 05:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.444 05:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.444 05:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.444 05:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:51.444 05:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:51.702 05:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:51.702 05:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.702 05:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:51.702 05:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:51.702 05:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:51.702 05:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.702 05:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:51.702 05:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.702 05:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.702 05:33:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.702 05:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.702 05:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.268 00:19:52.268 05:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.268 05:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.268 05:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.526 05:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.526 05:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.526 05:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.526 05:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.526 05:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.526 05:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.526 { 00:19:52.526 "cntlid": 39, 00:19:52.526 "qid": 0, 00:19:52.526 "state": "enabled", 00:19:52.526 "listen_address": { 00:19:52.526 "trtype": "TCP", 00:19:52.526 "adrfam": "IPv4", 00:19:52.526 "traddr": "10.0.0.2", 00:19:52.526 "trsvcid": "4420" 00:19:52.526 }, 00:19:52.526 "peer_address": { 00:19:52.526 "trtype": "TCP", 00:19:52.526 "adrfam": "IPv4", 00:19:52.526 "traddr": "10.0.0.1", 00:19:52.526 "trsvcid": "32888" 00:19:52.526 }, 00:19:52.526 "auth": { 00:19:52.526 "state": "completed", 00:19:52.526 "digest": "sha256", 00:19:52.526 "dhgroup": "ffdhe6144" 00:19:52.526 } 00:19:52.526 } 00:19:52.526 ]' 00:19:52.526 05:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.526 05:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.526 05:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.526 05:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:52.526 05:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.526 05:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.526 05:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.526 05:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.784 05:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:19:53.717 05:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.037 05:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.037 05:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.037 05:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.037 05:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.037 05:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.037 05:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.037 05:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:54.037 05:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:54.037 05:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:54.037 05:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.037 05:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:54.037 05:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:54.037 05:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:54.037 05:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.037 05:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.037 05:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.037 05:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.037 05:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.037 05:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.037 05:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.988 00:19:54.988 05:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.988 05:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.988 05:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.245 05:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.245 05:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.245 05:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.245 05:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.245 05:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.245 05:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.245 { 00:19:55.245 "cntlid": 41, 00:19:55.245 "qid": 0, 00:19:55.245 "state": "enabled", 00:19:55.245 "listen_address": { 00:19:55.245 "trtype": "TCP", 00:19:55.245 "adrfam": "IPv4", 00:19:55.245 "traddr": "10.0.0.2", 00:19:55.245 "trsvcid": "4420" 00:19:55.245 }, 00:19:55.245 "peer_address": { 00:19:55.245 "trtype": "TCP", 00:19:55.245 "adrfam": "IPv4", 00:19:55.245 "traddr": "10.0.0.1", 00:19:55.245 "trsvcid": "32912" 00:19:55.245 }, 00:19:55.245 "auth": { 00:19:55.245 "state": "completed", 00:19:55.245 "digest": "sha256", 00:19:55.245 "dhgroup": "ffdhe8192" 00:19:55.245 } 00:19:55.245 } 00:19:55.245 ]' 00:19:55.245 05:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.245 05:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.245 05:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.245 05:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:55.245 05:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.502 05:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.502 05:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.502 05:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.760 05:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:19:56.694 05:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.694 05:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.694 05:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.694 05:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.694 05:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.694 05:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:19:56.694 05:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:56.694 05:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:56.951 05:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:56.951 05:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.951 05:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:56.951 05:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:56.951 05:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:56.951 05:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.951 05:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.951 05:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.951 05:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.951 05:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.951 05:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.951 05:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.887 00:19:57.887 05:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.887 05:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.887 05:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.145 05:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.145 05:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.145 05:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.145 05:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.145 05:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.145 05:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.145 { 00:19:58.145 "cntlid": 43, 00:19:58.145 "qid": 0, 00:19:58.145 "state": "enabled", 00:19:58.145 "listen_address": { 00:19:58.145 "trtype": "TCP", 00:19:58.145 "adrfam": "IPv4", 00:19:58.145 "traddr": "10.0.0.2", 00:19:58.145 "trsvcid": "4420" 00:19:58.145 }, 00:19:58.145 "peer_address": { 
00:19:58.145 "trtype": "TCP", 00:19:58.145 "adrfam": "IPv4", 00:19:58.145 "traddr": "10.0.0.1", 00:19:58.145 "trsvcid": "32958" 00:19:58.145 }, 00:19:58.145 "auth": { 00:19:58.145 "state": "completed", 00:19:58.145 "digest": "sha256", 00:19:58.145 "dhgroup": "ffdhe8192" 00:19:58.145 } 00:19:58.145 } 00:19:58.145 ]' 00:19:58.145 05:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.145 05:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.145 05:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.145 05:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:58.145 05:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.145 05:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.145 05:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.145 05:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.403 05:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 00:19:59.335 05:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.335 05:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.335 05:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.335 05:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.335 05:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.335 05:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.335 05:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:59.335 05:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:59.901 05:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:59.901 05:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.901 05:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:59.901 05:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:59.901 05:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:59.901 05:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.901 05:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.901 05:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.901 05:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.901 05:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.901 05:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.902 05:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.467 00:20:00.724 05:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.724 05:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.724 05:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.982 05:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.982 05:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.982 05:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.982 05:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.982 05:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.982 05:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.982 { 00:20:00.982 "cntlid": 45, 00:20:00.982 "qid": 0, 00:20:00.982 "state": "enabled", 00:20:00.982 "listen_address": { 00:20:00.982 "trtype": "TCP", 00:20:00.982 "adrfam": "IPv4", 00:20:00.982 "traddr": "10.0.0.2", 00:20:00.982 "trsvcid": "4420" 00:20:00.982 }, 00:20:00.982 "peer_address": { 00:20:00.982 "trtype": "TCP", 00:20:00.982 "adrfam": "IPv4", 00:20:00.982 "traddr": "10.0.0.1", 00:20:00.982 "trsvcid": "32976" 00:20:00.982 }, 00:20:00.982 "auth": { 00:20:00.982 "state": "completed", 00:20:00.982 "digest": "sha256", 00:20:00.982 "dhgroup": "ffdhe8192" 00:20:00.982 } 00:20:00.982 } 00:20:00.982 ]' 00:20:00.982 05:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.982 05:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.982 05:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.982 05:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:00.982 05:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.982 05:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.982 05:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.982 05:34:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.239 05:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:20:02.172 05:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.172 05:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.172 05:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.172 05:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.172 05:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.172 05:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.172 05:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.172 05:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.429 05:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:02.429 05:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.429 05:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:02.429 05:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:02.429 05:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:02.429 05:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.429 05:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:02.429 05:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.429 05:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.429 05:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.429 05:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.429 05:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:20:03.361 00:20:03.361 05:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.361 05:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.361 05:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.618 05:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.618 05:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.618 05:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.618 05:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.618 05:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.618 05:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.618 { 00:20:03.618 "cntlid": 47, 00:20:03.618 "qid": 0, 00:20:03.618 "state": "enabled", 00:20:03.618 "listen_address": { 00:20:03.618 "trtype": "TCP", 00:20:03.618 "adrfam": "IPv4", 00:20:03.618 "traddr": "10.0.0.2", 00:20:03.618 "trsvcid": "4420" 00:20:03.618 }, 00:20:03.618 "peer_address": { 00:20:03.618 "trtype": "TCP", 00:20:03.618 "adrfam": "IPv4", 00:20:03.618 "traddr": "10.0.0.1", 00:20:03.618 "trsvcid": "47336" 00:20:03.618 }, 00:20:03.618 "auth": { 00:20:03.618 "state": "completed", 00:20:03.618 "digest": "sha256", 00:20:03.618 "dhgroup": "ffdhe8192" 00:20:03.618 } 00:20:03.618 } 00:20:03.618 ]' 00:20:03.618 05:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.618 05:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.618 05:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.618 05:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:03.618 05:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.618 05:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.618 05:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.618 05:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.875 05:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:20:04.805 05:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.805 05:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.805 05:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.805 05:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.805 
05:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.805 05:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:04.805 05:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.805 05:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.805 05:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:04.805 05:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:05.062 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:05.062 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.062 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:05.062 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:05.062 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:05.062 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.062 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.062 05:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.062 05:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.062 05:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.062 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.062 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.626 00:20:05.626 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.626 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.626 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.626 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.626 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.626 05:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.626 05:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.626 05:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.626 05:34:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.626 { 00:20:05.626 "cntlid": 49, 00:20:05.626 "qid": 0, 00:20:05.626 "state": "enabled", 00:20:05.626 "listen_address": { 00:20:05.626 "trtype": "TCP", 00:20:05.626 "adrfam": "IPv4", 00:20:05.626 "traddr": "10.0.0.2", 00:20:05.626 "trsvcid": "4420" 00:20:05.626 }, 00:20:05.626 "peer_address": { 00:20:05.626 "trtype": "TCP", 00:20:05.626 "adrfam": "IPv4", 00:20:05.626 "traddr": "10.0.0.1", 00:20:05.626 "trsvcid": "47370" 00:20:05.626 }, 00:20:05.626 "auth": { 00:20:05.626 "state": "completed", 00:20:05.626 "digest": "sha384", 00:20:05.626 "dhgroup": "null" 00:20:05.626 } 00:20:05.626 } 00:20:05.626 ]' 00:20:05.626 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.883 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.883 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.883 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:05.883 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.883 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.883 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.883 05:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.139 05:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:20:07.068 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.068 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.068 05:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.068 05:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.068 05:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.068 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.068 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:07.068 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:07.326 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:07.326 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.326 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:20:07.326 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:07.326 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:07.326 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.326 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.326 05:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.326 05:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.326 05:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.326 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.326 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.583 00:20:07.583 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.583 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.583 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.841 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.841 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.841 05:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.841 05:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.841 05:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.841 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.841 { 00:20:07.841 "cntlid": 51, 00:20:07.841 "qid": 0, 00:20:07.841 "state": "enabled", 00:20:07.841 "listen_address": { 00:20:07.841 "trtype": "TCP", 00:20:07.841 "adrfam": "IPv4", 00:20:07.841 "traddr": "10.0.0.2", 00:20:07.841 "trsvcid": "4420" 00:20:07.841 }, 00:20:07.841 "peer_address": { 00:20:07.841 "trtype": "TCP", 00:20:07.841 "adrfam": "IPv4", 00:20:07.841 "traddr": "10.0.0.1", 00:20:07.841 "trsvcid": "47408" 00:20:07.841 }, 00:20:07.841 "auth": { 00:20:07.841 "state": "completed", 00:20:07.841 "digest": "sha384", 00:20:07.841 "dhgroup": "null" 00:20:07.841 } 00:20:07.841 } 00:20:07.841 ]' 00:20:07.841 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.126 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.126 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.126 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
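The host-side half of each pass, as in the key1 entries above, is two RPCs against the initiator's socket: first restrict the allowed digest and DH group to the combination under test, then attach with the DH-HMAC-CHAP key names for that key index. A minimal sketch, assuming the hostrpc wrapper (rpc.py pointed at /var/tmp/host.sock) and that key1/ckey1 were registered with the host RPC server earlier in the run:

# Only offer the digest/dhgroup pair being exercised (sha384 + null here).
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null

# Attach to the target, presenting the host key and expecting the controller key.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
  -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1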
00:20:08.126 05:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.126 05:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.126 05:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.126 05:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.391 05:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 00:20:09.325 05:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.325 05:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.325 05:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.325 05:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.325 05:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.325 05:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.325 05:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.325 05:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.584 05:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:09.584 05:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.584 05:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.584 05:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:09.584 05:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:09.584 05:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.584 05:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.584 05:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.584 05:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.584 05:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.584 05:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:09.584 05:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.842 00:20:10.101 05:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.101 05:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.101 05:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.358 05:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.358 05:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.358 05:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.358 05:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.358 05:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.358 05:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.358 { 00:20:10.358 "cntlid": 53, 00:20:10.358 "qid": 0, 00:20:10.358 "state": "enabled", 00:20:10.358 "listen_address": { 00:20:10.358 "trtype": "TCP", 00:20:10.359 "adrfam": "IPv4", 00:20:10.359 "traddr": "10.0.0.2", 00:20:10.359 "trsvcid": "4420" 00:20:10.359 }, 00:20:10.359 "peer_address": { 00:20:10.359 "trtype": "TCP", 00:20:10.359 "adrfam": "IPv4", 00:20:10.359 "traddr": "10.0.0.1", 00:20:10.359 "trsvcid": "47444" 00:20:10.359 }, 00:20:10.359 "auth": { 00:20:10.359 "state": "completed", 00:20:10.359 "digest": "sha384", 00:20:10.359 "dhgroup": "null" 00:20:10.359 } 00:20:10.359 } 00:20:10.359 ]' 00:20:10.359 05:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.359 05:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.359 05:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.359 05:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:10.359 05:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.359 05:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.359 05:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.359 05:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.616 05:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:20:11.549 05:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.549 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:11.549 05:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.549 05:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.549 05:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.549 05:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.549 05:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.549 05:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:11.549 05:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:11.807 05:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:11.807 05:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.807 05:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:11.807 05:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:11.807 05:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:11.807 05:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.807 05:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:11.807 05:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.807 05:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.807 05:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.807 05:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:11.808 05:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.373 00:20:12.373 05:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.373 05:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.373 05:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.373 05:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.373 05:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.373 05:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.373 05:34:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:12.373 05:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.373 05:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.373 { 00:20:12.373 "cntlid": 55, 00:20:12.373 "qid": 0, 00:20:12.373 "state": "enabled", 00:20:12.373 "listen_address": { 00:20:12.373 "trtype": "TCP", 00:20:12.373 "adrfam": "IPv4", 00:20:12.373 "traddr": "10.0.0.2", 00:20:12.373 "trsvcid": "4420" 00:20:12.373 }, 00:20:12.373 "peer_address": { 00:20:12.373 "trtype": "TCP", 00:20:12.373 "adrfam": "IPv4", 00:20:12.373 "traddr": "10.0.0.1", 00:20:12.373 "trsvcid": "41772" 00:20:12.373 }, 00:20:12.373 "auth": { 00:20:12.373 "state": "completed", 00:20:12.373 "digest": "sha384", 00:20:12.373 "dhgroup": "null" 00:20:12.373 } 00:20:12.373 } 00:20:12.373 ]' 00:20:12.373 05:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.630 05:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.630 05:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.630 05:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:12.631 05:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.631 05:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.631 05:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.631 05:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.888 05:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:20:13.827 05:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.827 05:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.827 05:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.827 05:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.827 05:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.827 05:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.827 05:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.827 05:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:13.827 05:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.085 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:14.085 
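Before the host can attach for the sha384/ffdhe2048 pass that starts here, the target has to authorize the host NQN with the matching key pair; the nvmf_subsystem_add_host call in the entries that follow amounts to the sketch below (rpc_cmd is the wrapper for the target's default RPC socket, and key0/ckey0 are key names loaded earlier in the run):

# Authorize the host NQN on the subsystem and pin the key pair it must use.
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0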
05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.085 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:14.085 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:14.085 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:14.085 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.085 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.085 05:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.085 05:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.085 05:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.085 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.085 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.343 00:20:14.343 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.343 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.343 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.601 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.601 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.601 05:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.601 05:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.601 05:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.601 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.601 { 00:20:14.601 "cntlid": 57, 00:20:14.601 "qid": 0, 00:20:14.601 "state": "enabled", 00:20:14.601 "listen_address": { 00:20:14.601 "trtype": "TCP", 00:20:14.601 "adrfam": "IPv4", 00:20:14.601 "traddr": "10.0.0.2", 00:20:14.601 "trsvcid": "4420" 00:20:14.601 }, 00:20:14.601 "peer_address": { 00:20:14.601 "trtype": "TCP", 00:20:14.601 "adrfam": "IPv4", 00:20:14.601 "traddr": "10.0.0.1", 00:20:14.602 "trsvcid": "41796" 00:20:14.602 }, 00:20:14.602 "auth": { 00:20:14.602 "state": "completed", 00:20:14.602 "digest": "sha384", 00:20:14.602 "dhgroup": "ffdhe2048" 00:20:14.602 } 00:20:14.602 } 00:20:14.602 ]' 00:20:14.602 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.860 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.860 05:34:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.860 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:14.860 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.860 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.860 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.860 05:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.117 05:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:20:16.050 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.050 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.050 05:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.050 05:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.050 05:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.050 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.050 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.050 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.614 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:16.614 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.614 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:16.614 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:16.614 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:16.614 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.614 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.614 05:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.614 05:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.614 05:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.614 05:34:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.614 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.872 00:20:16.872 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.872 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.872 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.129 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.129 05:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.129 05:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.129 05:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.129 05:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.129 05:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.129 { 00:20:17.129 "cntlid": 59, 00:20:17.129 "qid": 0, 00:20:17.129 "state": "enabled", 00:20:17.129 "listen_address": { 00:20:17.129 "trtype": "TCP", 00:20:17.129 "adrfam": "IPv4", 00:20:17.129 "traddr": "10.0.0.2", 00:20:17.129 "trsvcid": "4420" 00:20:17.129 }, 00:20:17.129 "peer_address": { 00:20:17.129 "trtype": "TCP", 00:20:17.129 "adrfam": "IPv4", 00:20:17.129 "traddr": "10.0.0.1", 00:20:17.129 "trsvcid": "41814" 00:20:17.129 }, 00:20:17.129 "auth": { 00:20:17.129 "state": "completed", 00:20:17.129 "digest": "sha384", 00:20:17.129 "dhgroup": "ffdhe2048" 00:20:17.129 } 00:20:17.129 } 00:20:17.129 ]' 00:20:17.129 05:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.129 05:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.129 05:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.129 05:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:17.129 05:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.129 05:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.129 05:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.129 05:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.386 05:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 00:20:18.320 05:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.320 05:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.320 05:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.320 05:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.320 05:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.320 05:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.320 05:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.320 05:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.577 05:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:18.577 05:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.577 05:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.577 05:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:18.577 05:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:18.577 05:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.578 05:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.578 05:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.578 05:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.578 05:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.578 05:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.578 05:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.143 00:20:19.143 05:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.143 05:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.143 05:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:19.143 05:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.143 05:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.143 05:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.143 05:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.143 05:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.401 05:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.401 { 00:20:19.401 "cntlid": 61, 00:20:19.401 "qid": 0, 00:20:19.401 "state": "enabled", 00:20:19.401 "listen_address": { 00:20:19.401 "trtype": "TCP", 00:20:19.401 "adrfam": "IPv4", 00:20:19.401 "traddr": "10.0.0.2", 00:20:19.401 "trsvcid": "4420" 00:20:19.401 }, 00:20:19.401 "peer_address": { 00:20:19.401 "trtype": "TCP", 00:20:19.401 "adrfam": "IPv4", 00:20:19.401 "traddr": "10.0.0.1", 00:20:19.401 "trsvcid": "41826" 00:20:19.401 }, 00:20:19.401 "auth": { 00:20:19.401 "state": "completed", 00:20:19.401 "digest": "sha384", 00:20:19.401 "dhgroup": "ffdhe2048" 00:20:19.401 } 00:20:19.401 } 00:20:19.401 ]' 00:20:19.401 05:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.401 05:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.401 05:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.401 05:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:19.401 05:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.401 05:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.401 05:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.401 05:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.659 05:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:20:20.592 05:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.592 05:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.592 05:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.592 05:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.592 05:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.592 05:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.592 05:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:20:20.592 05:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:20.850 05:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:20.850 05:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.850 05:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:20.850 05:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:20.850 05:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:20.850 05:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.850 05:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:20.850 05:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.850 05:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.850 05:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.850 05:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:20.850 05:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.414 00:20:21.414 05:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.415 05:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.415 05:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.415 05:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.415 05:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.415 05:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.415 05:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.415 05:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.415 05:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.415 { 00:20:21.415 "cntlid": 63, 00:20:21.415 "qid": 0, 00:20:21.415 "state": "enabled", 00:20:21.415 "listen_address": { 00:20:21.415 "trtype": "TCP", 00:20:21.415 "adrfam": "IPv4", 00:20:21.415 "traddr": "10.0.0.2", 00:20:21.415 "trsvcid": "4420" 00:20:21.415 }, 00:20:21.415 "peer_address": { 00:20:21.415 "trtype": "TCP", 00:20:21.415 "adrfam": "IPv4", 00:20:21.415 "traddr": "10.0.0.1", 00:20:21.415 "trsvcid": "52726" 00:20:21.415 }, 00:20:21.415 "auth": { 00:20:21.415 "state": "completed", 00:20:21.415 "digest": 
"sha384", 00:20:21.415 "dhgroup": "ffdhe2048" 00:20:21.415 } 00:20:21.415 } 00:20:21.415 ]' 00:20:21.415 05:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.672 05:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.672 05:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.672 05:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.672 05:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.672 05:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.672 05:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.672 05:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.937 05:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:20:22.923 05:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.923 05:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.923 05:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.923 05:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.923 05:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.923 05:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.923 05:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.923 05:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.923 05:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:23.181 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:23.181 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.181 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.181 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:23.181 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:23.181 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.181 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:23.181 05:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.181 05:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.181 05:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.181 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.181 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.438 00:20:23.438 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.438 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.438 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.694 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.694 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.694 05:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.694 05:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.694 05:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.694 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.694 { 00:20:23.694 "cntlid": 65, 00:20:23.694 "qid": 0, 00:20:23.694 "state": "enabled", 00:20:23.694 "listen_address": { 00:20:23.694 "trtype": "TCP", 00:20:23.694 "adrfam": "IPv4", 00:20:23.694 "traddr": "10.0.0.2", 00:20:23.694 "trsvcid": "4420" 00:20:23.694 }, 00:20:23.694 "peer_address": { 00:20:23.694 "trtype": "TCP", 00:20:23.694 "adrfam": "IPv4", 00:20:23.694 "traddr": "10.0.0.1", 00:20:23.694 "trsvcid": "52748" 00:20:23.694 }, 00:20:23.694 "auth": { 00:20:23.694 "state": "completed", 00:20:23.694 "digest": "sha384", 00:20:23.694 "dhgroup": "ffdhe3072" 00:20:23.694 } 00:20:23.694 } 00:20:23.694 ]' 00:20:23.694 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.952 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.952 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.952 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:23.952 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.952 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.952 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.952 05:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.209 
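After the SPDK initiator detaches, the entry that follows repeats the handshake with the kernel initiator: nvme-cli is handed the raw DHHC-1 secrets rather than SPDK key names. A sketch with placeholders standing in for the base64 blobs printed in the log (the two-digit field after "DHHC-1:" records which hash, if any, was applied to the secret, 00 meaning it is used as-is):

# Kernel-initiator leg of the pass; <host secret>/<controller secret> are placeholders.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
  --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
  --dhchap-secret 'DHHC-1:00:<host secret>' \
  --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0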
05:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:20:25.142 05:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.142 05:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.142 05:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.142 05:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.142 05:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.142 05:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.142 05:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.142 05:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.399 05:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:25.399 05:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.399 05:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.399 05:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:25.399 05:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:25.399 05:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.399 05:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.399 05:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.399 05:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.399 05:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.399 05:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.399 05:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.963 00:20:25.963 05:34:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.963 05:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.964 05:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.221 05:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.221 05:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.221 05:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.221 05:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.221 05:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.221 05:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.221 { 00:20:26.221 "cntlid": 67, 00:20:26.221 "qid": 0, 00:20:26.221 "state": "enabled", 00:20:26.221 "listen_address": { 00:20:26.221 "trtype": "TCP", 00:20:26.221 "adrfam": "IPv4", 00:20:26.221 "traddr": "10.0.0.2", 00:20:26.221 "trsvcid": "4420" 00:20:26.221 }, 00:20:26.221 "peer_address": { 00:20:26.221 "trtype": "TCP", 00:20:26.221 "adrfam": "IPv4", 00:20:26.221 "traddr": "10.0.0.1", 00:20:26.221 "trsvcid": "52784" 00:20:26.221 }, 00:20:26.221 "auth": { 00:20:26.221 "state": "completed", 00:20:26.221 "digest": "sha384", 00:20:26.221 "dhgroup": "ffdhe3072" 00:20:26.221 } 00:20:26.221 } 00:20:26.221 ]' 00:20:26.221 05:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.221 05:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.221 05:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.221 05:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:26.221 05:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.221 05:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.221 05:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.221 05:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.477 05:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 00:20:27.406 05:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.406 05:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.406 05:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.406 05:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.406 
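Each pass tears down in the order seen above so the next key index starts from a clean state on both ends: the SPDK initiator drops its controller, the kernel initiator disconnects, and the host NQN is removed from the subsystem. Roughly, using the same wrappers as before:

# Host side: drop the controller attached for verification.
hostrpc bdev_nvme_detach_controller nvme0

# Kernel side: tear down the nvme-cli connection made with the raw secrets.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# Target side: de-authorize the host so the next iteration re-adds it with new keys.
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55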
05:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.406 05:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.406 05:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.406 05:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.664 05:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:27.664 05:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.664 05:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.664 05:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:27.664 05:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:27.664 05:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.664 05:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.664 05:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.664 05:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.664 05:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.664 05:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.664 05:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.229 00:20:28.229 05:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.229 05:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.229 05:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.488 05:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.488 05:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.488 05:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.488 05:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.488 05:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.488 05:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.488 { 00:20:28.488 "cntlid": 69, 00:20:28.488 "qid": 0, 00:20:28.488 "state": "enabled", 00:20:28.488 "listen_address": { 
00:20:28.488 "trtype": "TCP", 00:20:28.488 "adrfam": "IPv4", 00:20:28.488 "traddr": "10.0.0.2", 00:20:28.488 "trsvcid": "4420" 00:20:28.488 }, 00:20:28.488 "peer_address": { 00:20:28.488 "trtype": "TCP", 00:20:28.488 "adrfam": "IPv4", 00:20:28.488 "traddr": "10.0.0.1", 00:20:28.488 "trsvcid": "52796" 00:20:28.488 }, 00:20:28.488 "auth": { 00:20:28.488 "state": "completed", 00:20:28.488 "digest": "sha384", 00:20:28.488 "dhgroup": "ffdhe3072" 00:20:28.488 } 00:20:28.488 } 00:20:28.488 ]' 00:20:28.488 05:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.488 05:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.488 05:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.488 05:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:28.488 05:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.488 05:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.488 05:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.488 05:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.746 05:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:20:30.119 05:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.119 05:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.119 05:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.119 05:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.119 05:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.119 05:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.119 05:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.119 05:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.119 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:30.119 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.119 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:30.119 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:30.119 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:30.119 
05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.119 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:30.119 05:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.119 05:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.119 05:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.119 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:30.119 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:30.377 00:20:30.377 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.377 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.377 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.635 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.635 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.635 05:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.635 05:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.635 05:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.635 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.635 { 00:20:30.635 "cntlid": 71, 00:20:30.635 "qid": 0, 00:20:30.635 "state": "enabled", 00:20:30.635 "listen_address": { 00:20:30.635 "trtype": "TCP", 00:20:30.635 "adrfam": "IPv4", 00:20:30.635 "traddr": "10.0.0.2", 00:20:30.635 "trsvcid": "4420" 00:20:30.635 }, 00:20:30.635 "peer_address": { 00:20:30.635 "trtype": "TCP", 00:20:30.635 "adrfam": "IPv4", 00:20:30.635 "traddr": "10.0.0.1", 00:20:30.635 "trsvcid": "52810" 00:20:30.635 }, 00:20:30.635 "auth": { 00:20:30.635 "state": "completed", 00:20:30.635 "digest": "sha384", 00:20:30.635 "dhgroup": "ffdhe3072" 00:20:30.635 } 00:20:30.635 } 00:20:30.635 ]' 00:20:30.635 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.892 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.892 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.893 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:30.893 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.893 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.893 05:34:37 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.893 05:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.150 05:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:20:32.081 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.081 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.081 05:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.081 05:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.081 05:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.081 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.081 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.081 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:32.081 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:32.339 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:32.339 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.339 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.339 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:32.339 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:32.339 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.339 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.339 05:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.339 05:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.339 05:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.339 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.339 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.907 00:20:32.907 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.907 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.907 05:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.165 05:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.165 05:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.165 05:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.165 05:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.165 05:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.165 05:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.165 { 00:20:33.165 "cntlid": 73, 00:20:33.165 "qid": 0, 00:20:33.165 "state": "enabled", 00:20:33.165 "listen_address": { 00:20:33.165 "trtype": "TCP", 00:20:33.165 "adrfam": "IPv4", 00:20:33.165 "traddr": "10.0.0.2", 00:20:33.165 "trsvcid": "4420" 00:20:33.165 }, 00:20:33.165 "peer_address": { 00:20:33.165 "trtype": "TCP", 00:20:33.165 "adrfam": "IPv4", 00:20:33.165 "traddr": "10.0.0.1", 00:20:33.165 "trsvcid": "59146" 00:20:33.165 }, 00:20:33.165 "auth": { 00:20:33.165 "state": "completed", 00:20:33.165 "digest": "sha384", 00:20:33.165 "dhgroup": "ffdhe4096" 00:20:33.165 } 00:20:33.165 } 00:20:33.165 ]' 00:20:33.165 05:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.165 05:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.165 05:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.165 05:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.165 05:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.165 05:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.165 05:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.165 05:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.423 05:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:20:34.356 05:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.356 05:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.356 05:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.356 05:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.356 05:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.356 05:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.356 05:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.356 05:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.614 05:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:34.614 05:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.614 05:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.614 05:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:34.614 05:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:34.614 05:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.614 05:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.614 05:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.614 05:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.614 05:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.614 05:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.614 05:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.871 00:20:34.871 05:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.871 05:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.871 05:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.128 05:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.128 05:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.128 05:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.128 05:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:35.128 05:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.128 05:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.128 { 00:20:35.128 "cntlid": 75, 00:20:35.128 "qid": 0, 00:20:35.128 "state": "enabled", 00:20:35.128 "listen_address": { 00:20:35.128 "trtype": "TCP", 00:20:35.128 "adrfam": "IPv4", 00:20:35.128 "traddr": "10.0.0.2", 00:20:35.128 "trsvcid": "4420" 00:20:35.128 }, 00:20:35.128 "peer_address": { 00:20:35.128 "trtype": "TCP", 00:20:35.128 "adrfam": "IPv4", 00:20:35.128 "traddr": "10.0.0.1", 00:20:35.128 "trsvcid": "59174" 00:20:35.128 }, 00:20:35.128 "auth": { 00:20:35.128 "state": "completed", 00:20:35.128 "digest": "sha384", 00:20:35.128 "dhgroup": "ffdhe4096" 00:20:35.128 } 00:20:35.128 } 00:20:35.128 ]' 00:20:35.128 05:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.384 05:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.384 05:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.384 05:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.384 05:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.384 05:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.384 05:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.384 05:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.642 05:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 00:20:36.619 05:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.619 05:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.619 05:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.619 05:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.619 05:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.619 05:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.619 05:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.619 05:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.877 05:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:36.877 05:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:20:36.877 05:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.877 05:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:36.877 05:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:36.877 05:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.877 05:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.877 05:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.877 05:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.877 05:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.877 05:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.877 05:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.443 00:20:37.443 05:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.443 05:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.443 05:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.443 05:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.443 05:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.443 05:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.443 05:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.443 05:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.443 05:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.443 { 00:20:37.443 "cntlid": 77, 00:20:37.443 "qid": 0, 00:20:37.443 "state": "enabled", 00:20:37.443 "listen_address": { 00:20:37.443 "trtype": "TCP", 00:20:37.443 "adrfam": "IPv4", 00:20:37.443 "traddr": "10.0.0.2", 00:20:37.443 "trsvcid": "4420" 00:20:37.443 }, 00:20:37.443 "peer_address": { 00:20:37.443 "trtype": "TCP", 00:20:37.443 "adrfam": "IPv4", 00:20:37.443 "traddr": "10.0.0.1", 00:20:37.443 "trsvcid": "59202" 00:20:37.443 }, 00:20:37.443 "auth": { 00:20:37.443 "state": "completed", 00:20:37.443 "digest": "sha384", 00:20:37.443 "dhgroup": "ffdhe4096" 00:20:37.443 } 00:20:37.443 } 00:20:37.443 ]' 00:20:37.443 05:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.701 05:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.701 05:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:20:37.701 05:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.701 05:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.701 05:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.701 05:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.701 05:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.959 05:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:20:38.893 05:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.893 05:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.893 05:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.893 05:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.893 05:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.893 05:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.893 05:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.893 05:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.151 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:39.151 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.151 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:39.151 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:39.151 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:39.151 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.151 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:39.151 05:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.151 05:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.151 05:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.151 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.151 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.718 00:20:39.718 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.718 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.718 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.718 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.718 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.718 05:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.718 05:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.718 05:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.718 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.718 { 00:20:39.718 "cntlid": 79, 00:20:39.718 "qid": 0, 00:20:39.718 "state": "enabled", 00:20:39.718 "listen_address": { 00:20:39.718 "trtype": "TCP", 00:20:39.718 "adrfam": "IPv4", 00:20:39.718 "traddr": "10.0.0.2", 00:20:39.718 "trsvcid": "4420" 00:20:39.718 }, 00:20:39.718 "peer_address": { 00:20:39.718 "trtype": "TCP", 00:20:39.718 "adrfam": "IPv4", 00:20:39.718 "traddr": "10.0.0.1", 00:20:39.718 "trsvcid": "59238" 00:20:39.718 }, 00:20:39.718 "auth": { 00:20:39.718 "state": "completed", 00:20:39.718 "digest": "sha384", 00:20:39.718 "dhgroup": "ffdhe4096" 00:20:39.718 } 00:20:39.718 } 00:20:39.718 ]' 00:20:39.718 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.975 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.975 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.975 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:39.975 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.975 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.975 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.975 05:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.233 05:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:20:41.165 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.165 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.166 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.166 05:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.166 05:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.166 05:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.166 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.166 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.166 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.166 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.423 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:41.423 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.423 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:41.423 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:41.423 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:41.423 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.423 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.423 05:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.423 05:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.423 05:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.423 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.423 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.988 00:20:41.988 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.988 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.988 05:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.246 05:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.246 05:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.246 05:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.246 05:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.246 05:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.246 05:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.246 { 00:20:42.246 "cntlid": 81, 00:20:42.246 "qid": 0, 00:20:42.246 "state": "enabled", 00:20:42.246 "listen_address": { 00:20:42.246 "trtype": "TCP", 00:20:42.246 "adrfam": "IPv4", 00:20:42.246 "traddr": "10.0.0.2", 00:20:42.246 "trsvcid": "4420" 00:20:42.246 }, 00:20:42.246 "peer_address": { 00:20:42.246 "trtype": "TCP", 00:20:42.246 "adrfam": "IPv4", 00:20:42.246 "traddr": "10.0.0.1", 00:20:42.246 "trsvcid": "46914" 00:20:42.246 }, 00:20:42.246 "auth": { 00:20:42.246 "state": "completed", 00:20:42.246 "digest": "sha384", 00:20:42.246 "dhgroup": "ffdhe6144" 00:20:42.246 } 00:20:42.246 } 00:20:42.246 ]' 00:20:42.246 05:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.246 05:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.246 05:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.246 05:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.246 05:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.246 05:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.246 05:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.246 05:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.504 05:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:20:43.435 05:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.693 05:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.693 05:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.693 05:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.693 05:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.693 05:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.693 05:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.693 05:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.950 05:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:43.950 05:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.950 05:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:43.950 05:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:43.950 05:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:43.950 05:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.950 05:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.950 05:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.950 05:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.950 05:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.950 05:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.950 05:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.514 00:20:44.514 05:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.514 05:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.514 05:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.514 05:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.514 05:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.514 05:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.514 05:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.514 05:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.514 05:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.514 { 00:20:44.514 "cntlid": 83, 00:20:44.514 "qid": 0, 00:20:44.514 "state": "enabled", 00:20:44.514 "listen_address": { 00:20:44.514 "trtype": "TCP", 00:20:44.514 "adrfam": "IPv4", 00:20:44.514 "traddr": "10.0.0.2", 00:20:44.514 "trsvcid": "4420" 00:20:44.514 }, 00:20:44.514 "peer_address": { 00:20:44.514 "trtype": "TCP", 00:20:44.514 "adrfam": "IPv4", 00:20:44.514 "traddr": "10.0.0.1", 00:20:44.514 "trsvcid": "46934" 00:20:44.514 }, 00:20:44.514 "auth": { 00:20:44.514 "state": "completed", 00:20:44.514 "digest": "sha384", 00:20:44.514 
"dhgroup": "ffdhe6144" 00:20:44.514 } 00:20:44.514 } 00:20:44.514 ]' 00:20:44.514 05:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.771 05:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.771 05:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.771 05:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.771 05:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.771 05:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.771 05:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.771 05:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.027 05:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 00:20:45.958 05:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.958 05:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.958 05:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.958 05:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.958 05:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.958 05:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.958 05:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.958 05:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.215 05:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:46.215 05:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.215 05:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:46.215 05:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:46.215 05:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:46.215 05:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.215 05:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.215 05:34:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.215 05:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.215 05:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.215 05:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.215 05:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.780 00:20:46.780 05:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.780 05:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.780 05:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.037 05:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.037 05:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.037 05:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.037 05:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.037 05:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.037 05:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.037 { 00:20:47.037 "cntlid": 85, 00:20:47.037 "qid": 0, 00:20:47.037 "state": "enabled", 00:20:47.037 "listen_address": { 00:20:47.037 "trtype": "TCP", 00:20:47.037 "adrfam": "IPv4", 00:20:47.037 "traddr": "10.0.0.2", 00:20:47.037 "trsvcid": "4420" 00:20:47.037 }, 00:20:47.037 "peer_address": { 00:20:47.037 "trtype": "TCP", 00:20:47.037 "adrfam": "IPv4", 00:20:47.037 "traddr": "10.0.0.1", 00:20:47.037 "trsvcid": "46952" 00:20:47.037 }, 00:20:47.037 "auth": { 00:20:47.037 "state": "completed", 00:20:47.037 "digest": "sha384", 00:20:47.037 "dhgroup": "ffdhe6144" 00:20:47.037 } 00:20:47.037 } 00:20:47.037 ]' 00:20:47.037 05:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.037 05:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.037 05:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.293 05:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.293 05:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.293 05:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.293 05:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.293 05:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.550 05:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:20:48.481 05:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.481 05:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.481 05:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.481 05:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.482 05:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.482 05:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.482 05:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.482 05:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.739 05:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:48.740 05:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.740 05:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:48.740 05:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:48.740 05:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:48.740 05:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.740 05:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:48.740 05:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.740 05:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.740 05:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.740 05:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:48.740 05:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.304 00:20:49.304 05:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.304 05:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.304 05:34:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.562 05:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.562 05:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.562 05:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.562 05:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.562 05:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.562 05:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.562 { 00:20:49.562 "cntlid": 87, 00:20:49.562 "qid": 0, 00:20:49.562 "state": "enabled", 00:20:49.562 "listen_address": { 00:20:49.562 "trtype": "TCP", 00:20:49.562 "adrfam": "IPv4", 00:20:49.562 "traddr": "10.0.0.2", 00:20:49.562 "trsvcid": "4420" 00:20:49.562 }, 00:20:49.562 "peer_address": { 00:20:49.563 "trtype": "TCP", 00:20:49.563 "adrfam": "IPv4", 00:20:49.563 "traddr": "10.0.0.1", 00:20:49.563 "trsvcid": "46986" 00:20:49.563 }, 00:20:49.563 "auth": { 00:20:49.563 "state": "completed", 00:20:49.563 "digest": "sha384", 00:20:49.563 "dhgroup": "ffdhe6144" 00:20:49.563 } 00:20:49.563 } 00:20:49.563 ]' 00:20:49.563 05:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.563 05:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.563 05:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.563 05:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:49.563 05:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.563 05:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.563 05:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.563 05:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.885 05:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:20:50.834 05:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.834 05:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.834 05:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.834 05:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.834 05:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.834 05:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.834 05:34:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.835 05:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.835 05:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.092 05:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:51.092 05:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.092 05:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.092 05:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:51.092 05:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:51.092 05:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.092 05:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.092 05:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.092 05:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.092 05:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.092 05:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.092 05:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.025 00:20:52.025 05:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.025 05:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.025 05:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.283 05:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.283 05:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.283 05:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.284 05:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.284 05:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.284 05:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.284 { 00:20:52.284 "cntlid": 89, 00:20:52.284 "qid": 0, 00:20:52.284 "state": "enabled", 00:20:52.284 "listen_address": { 00:20:52.284 "trtype": "TCP", 00:20:52.284 "adrfam": "IPv4", 00:20:52.284 "traddr": "10.0.0.2", 00:20:52.284 
"trsvcid": "4420" 00:20:52.284 }, 00:20:52.284 "peer_address": { 00:20:52.284 "trtype": "TCP", 00:20:52.284 "adrfam": "IPv4", 00:20:52.284 "traddr": "10.0.0.1", 00:20:52.284 "trsvcid": "42044" 00:20:52.284 }, 00:20:52.284 "auth": { 00:20:52.284 "state": "completed", 00:20:52.284 "digest": "sha384", 00:20:52.284 "dhgroup": "ffdhe8192" 00:20:52.284 } 00:20:52.284 } 00:20:52.284 ]' 00:20:52.284 05:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.284 05:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.284 05:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.284 05:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:52.284 05:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.284 05:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.284 05:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.284 05:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.541 05:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.915 05:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.849 00:20:54.849 05:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.849 05:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.849 05:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.107 05:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.107 05:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.107 05:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.107 05:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.107 05:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.107 05:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.107 { 00:20:55.107 "cntlid": 91, 00:20:55.107 "qid": 0, 00:20:55.107 "state": "enabled", 00:20:55.107 "listen_address": { 00:20:55.107 "trtype": "TCP", 00:20:55.107 "adrfam": "IPv4", 00:20:55.107 "traddr": "10.0.0.2", 00:20:55.107 "trsvcid": "4420" 00:20:55.107 }, 00:20:55.107 "peer_address": { 00:20:55.107 "trtype": "TCP", 00:20:55.107 "adrfam": "IPv4", 00:20:55.107 "traddr": "10.0.0.1", 00:20:55.107 "trsvcid": "42074" 00:20:55.107 }, 00:20:55.107 "auth": { 00:20:55.107 "state": "completed", 00:20:55.107 "digest": "sha384", 00:20:55.107 "dhgroup": "ffdhe8192" 00:20:55.107 } 00:20:55.107 } 00:20:55.107 ]' 00:20:55.107 05:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.107 05:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.107 05:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.107 05:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.107 05:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.107 05:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.107 05:35:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.107 05:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.365 05:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 00:20:56.297 05:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.297 05:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.297 05:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.297 05:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.297 05:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.297 05:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.297 05:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.297 05:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.553 05:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:56.553 05:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.553 05:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:56.553 05:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:56.553 05:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:56.553 05:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.553 05:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.553 05:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.553 05:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.553 05:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.553 05:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.553 05:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.483 00:20:57.483 05:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.483 05:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.483 05:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.740 05:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.740 05:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.740 05:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.740 05:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.740 05:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.740 05:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.740 { 00:20:57.740 "cntlid": 93, 00:20:57.740 "qid": 0, 00:20:57.740 "state": "enabled", 00:20:57.740 "listen_address": { 00:20:57.740 "trtype": "TCP", 00:20:57.740 "adrfam": "IPv4", 00:20:57.740 "traddr": "10.0.0.2", 00:20:57.740 "trsvcid": "4420" 00:20:57.740 }, 00:20:57.740 "peer_address": { 00:20:57.740 "trtype": "TCP", 00:20:57.740 "adrfam": "IPv4", 00:20:57.740 "traddr": "10.0.0.1", 00:20:57.740 "trsvcid": "42102" 00:20:57.740 }, 00:20:57.740 "auth": { 00:20:57.740 "state": "completed", 00:20:57.740 "digest": "sha384", 00:20:57.740 "dhgroup": "ffdhe8192" 00:20:57.740 } 00:20:57.740 } 00:20:57.740 ]' 00:20:57.740 05:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.740 05:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.740 05:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.997 05:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.997 05:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.997 05:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.997 05:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.997 05:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.254 05:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:20:59.187 05:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.187 05:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.187 05:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.187 05:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.187 05:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.187 05:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.187 05:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.187 05:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.444 05:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:59.444 05:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.444 05:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:59.445 05:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:59.445 05:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:59.445 05:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.445 05:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:59.445 05:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.445 05:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.445 05:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.445 05:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.445 05:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.378 00:21:00.378 05:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.378 05:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.378 05:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.636 05:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.636 05:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.636 05:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.636 05:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.636 05:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.636 05:35:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.636 { 00:21:00.636 "cntlid": 95, 00:21:00.636 "qid": 0, 00:21:00.636 "state": "enabled", 00:21:00.636 "listen_address": { 00:21:00.636 "trtype": "TCP", 00:21:00.636 "adrfam": "IPv4", 00:21:00.636 "traddr": "10.0.0.2", 00:21:00.636 "trsvcid": "4420" 00:21:00.636 }, 00:21:00.636 "peer_address": { 00:21:00.636 "trtype": "TCP", 00:21:00.636 "adrfam": "IPv4", 00:21:00.636 "traddr": "10.0.0.1", 00:21:00.636 "trsvcid": "42134" 00:21:00.636 }, 00:21:00.636 "auth": { 00:21:00.636 "state": "completed", 00:21:00.636 "digest": "sha384", 00:21:00.636 "dhgroup": "ffdhe8192" 00:21:00.636 } 00:21:00.636 } 00:21:00.636 ]' 00:21:00.636 05:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.636 05:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.636 05:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.636 05:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.636 05:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.636 05:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.636 05:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.636 05:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.894 05:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:21:01.826 05:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.826 05:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.826 05:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.826 05:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.826 05:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.826 05:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:01.826 05:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.826 05:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.826 05:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.826 05:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.083 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:02.083 05:35:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.083 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.083 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:02.083 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:02.083 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.083 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.083 05:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.083 05:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.083 05:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.083 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.083 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.645 00:21:02.645 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.645 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.645 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.904 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.904 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.904 05:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.904 05:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.904 05:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.904 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.904 { 00:21:02.904 "cntlid": 97, 00:21:02.904 "qid": 0, 00:21:02.904 "state": "enabled", 00:21:02.904 "listen_address": { 00:21:02.904 "trtype": "TCP", 00:21:02.904 "adrfam": "IPv4", 00:21:02.904 "traddr": "10.0.0.2", 00:21:02.904 "trsvcid": "4420" 00:21:02.904 }, 00:21:02.904 "peer_address": { 00:21:02.904 "trtype": "TCP", 00:21:02.904 "adrfam": "IPv4", 00:21:02.904 "traddr": "10.0.0.1", 00:21:02.904 "trsvcid": "53228" 00:21:02.904 }, 00:21:02.904 "auth": { 00:21:02.904 "state": "completed", 00:21:02.904 "digest": "sha512", 00:21:02.904 "dhgroup": "null" 00:21:02.904 } 00:21:02.904 } 00:21:02.904 ]' 00:21:02.904 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.904 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.904 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:21:02.904 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:02.904 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.904 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.904 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.904 05:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.161 05:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:21:04.093 05:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.093 05:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.093 05:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.093 05:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.093 05:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.093 05:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.093 05:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.093 05:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.376 05:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:04.376 05:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.376 05:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.376 05:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:04.376 05:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:04.376 05:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.376 05:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.376 05:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.376 05:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.376 05:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.376 05:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.376 05:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.668 00:21:04.926 05:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.926 05:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.926 05:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.183 05:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.183 05:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.183 05:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.183 05:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.183 05:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.183 05:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.183 { 00:21:05.183 "cntlid": 99, 00:21:05.183 "qid": 0, 00:21:05.183 "state": "enabled", 00:21:05.183 "listen_address": { 00:21:05.183 "trtype": "TCP", 00:21:05.183 "adrfam": "IPv4", 00:21:05.183 "traddr": "10.0.0.2", 00:21:05.183 "trsvcid": "4420" 00:21:05.183 }, 00:21:05.183 "peer_address": { 00:21:05.183 "trtype": "TCP", 00:21:05.183 "adrfam": "IPv4", 00:21:05.183 "traddr": "10.0.0.1", 00:21:05.183 "trsvcid": "53238" 00:21:05.183 }, 00:21:05.183 "auth": { 00:21:05.183 "state": "completed", 00:21:05.183 "digest": "sha512", 00:21:05.183 "dhgroup": "null" 00:21:05.183 } 00:21:05.183 } 00:21:05.183 ]' 00:21:05.183 05:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.183 05:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.183 05:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.183 05:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:05.183 05:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.183 05:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.183 05:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.183 05:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.444 05:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 
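The trace above repeats the same connect_authenticate cycle for every digest/dhgroup/key combination: the host-side DH-HMAC-CHAP options are restricted to the pair under test, the host NQN is registered on the subsystem with that key, a controller is attached through the host RPC socket, and the accepted qpair's auth fields are checked with jq before the controller is detached again. The condensed sketch below restates one such cycle; it is illustrative only: the NQN, host UUID, RPC paths, and jq filters are copied from the log, while the digest, dhgroup, and keyid values are fixed here for readability, rpc_cmd is assumed to be the test suite's target-side RPC helper, and the keys key0..key3 / ckey0..ckey3 are assumed to have been loaded beforehand.

  # One connect_authenticate cycle (illustrative sketch; see assumptions above).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  digest=sha512 dhgroup=ffdhe2048 keyid=1

  # Limit the host (initiator) side to the digest and DH group under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Register the host on the target subsystem with the key (and ctrlr key) under test.
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # Attach a controller through the host socket, which forces the DH-HMAC-CHAP handshake.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # Confirm the accepted qpair negotiated the expected digest, DH group, and state.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

  # Tear down before the next digest/dhgroup/key combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

A kernel-initiator round then repeats the handshake outside SPDK: nvme connect is given the same key material via --dhchap-secret/--dhchap-ctrl-secret, and nvme disconnect plus nvmf_subsystem_remove_host clean up, exactly as in the entries immediately before and after this point.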
00:21:06.379 05:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.379 05:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.379 05:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.379 05:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.379 05:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.379 05:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.379 05:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.379 05:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.636 05:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:06.636 05:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.636 05:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.636 05:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:06.636 05:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:06.636 05:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.636 05:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.636 05:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.636 05:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.636 05:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.636 05:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.636 05:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.892 00:21:06.892 05:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.892 05:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.892 05:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.150 05:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.150 05:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.150 05:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.150 05:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.150 05:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.150 05:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.150 { 00:21:07.150 "cntlid": 101, 00:21:07.150 "qid": 0, 00:21:07.150 "state": "enabled", 00:21:07.150 "listen_address": { 00:21:07.150 "trtype": "TCP", 00:21:07.150 "adrfam": "IPv4", 00:21:07.150 "traddr": "10.0.0.2", 00:21:07.150 "trsvcid": "4420" 00:21:07.150 }, 00:21:07.150 "peer_address": { 00:21:07.150 "trtype": "TCP", 00:21:07.150 "adrfam": "IPv4", 00:21:07.150 "traddr": "10.0.0.1", 00:21:07.150 "trsvcid": "53248" 00:21:07.150 }, 00:21:07.150 "auth": { 00:21:07.150 "state": "completed", 00:21:07.150 "digest": "sha512", 00:21:07.150 "dhgroup": "null" 00:21:07.150 } 00:21:07.150 } 00:21:07.150 ]' 00:21:07.150 05:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.407 05:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.407 05:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.407 05:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:07.407 05:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.407 05:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.407 05:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.407 05:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.665 05:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:21:08.598 05:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.598 05:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.598 05:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.598 05:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.598 05:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.598 05:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.598 05:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.598 05:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.856 05:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:08.856 05:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.856 05:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.856 05:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:08.856 05:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:08.856 05:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.856 05:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:08.856 05:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.856 05:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.856 05:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.856 05:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.856 05:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.114 00:21:09.114 05:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.114 05:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.114 05:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.373 05:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.373 05:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.373 05:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.373 05:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.373 05:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.373 05:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.373 { 00:21:09.373 "cntlid": 103, 00:21:09.373 "qid": 0, 00:21:09.373 "state": "enabled", 00:21:09.373 "listen_address": { 00:21:09.373 "trtype": "TCP", 00:21:09.373 "adrfam": "IPv4", 00:21:09.373 "traddr": "10.0.0.2", 00:21:09.373 "trsvcid": "4420" 00:21:09.373 }, 00:21:09.373 "peer_address": { 00:21:09.373 "trtype": "TCP", 00:21:09.373 "adrfam": "IPv4", 00:21:09.373 "traddr": "10.0.0.1", 00:21:09.373 "trsvcid": "53280" 00:21:09.373 }, 00:21:09.373 "auth": { 00:21:09.373 "state": "completed", 00:21:09.373 "digest": "sha512", 00:21:09.373 "dhgroup": "null" 00:21:09.373 } 00:21:09.373 } 00:21:09.373 ]' 00:21:09.373 05:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.373 05:35:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.373 05:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.373 05:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:09.373 05:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.373 05:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.373 05:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.373 05:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.631 05:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.003 05:35:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.003 05:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.261 00:21:11.261 05:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.261 05:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.261 05:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.519 05:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.519 05:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.519 05:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.519 05:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.519 05:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.519 05:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.519 { 00:21:11.519 "cntlid": 105, 00:21:11.519 "qid": 0, 00:21:11.519 "state": "enabled", 00:21:11.519 "listen_address": { 00:21:11.519 "trtype": "TCP", 00:21:11.519 "adrfam": "IPv4", 00:21:11.519 "traddr": "10.0.0.2", 00:21:11.519 "trsvcid": "4420" 00:21:11.519 }, 00:21:11.519 "peer_address": { 00:21:11.519 "trtype": "TCP", 00:21:11.519 "adrfam": "IPv4", 00:21:11.519 "traddr": "10.0.0.1", 00:21:11.519 "trsvcid": "38742" 00:21:11.519 }, 00:21:11.519 "auth": { 00:21:11.519 "state": "completed", 00:21:11.519 "digest": "sha512", 00:21:11.519 "dhgroup": "ffdhe2048" 00:21:11.519 } 00:21:11.519 } 00:21:11.519 ]' 00:21:11.519 05:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.519 05:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.519 05:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.775 05:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:11.775 05:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.775 05:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.775 05:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.775 05:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.032 05:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:21:12.964 05:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.964 05:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.964 05:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.964 05:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.964 05:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.964 05:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.964 05:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:12.964 05:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.222 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:13.222 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.222 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.222 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:13.222 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:13.222 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.222 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.222 05:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.222 05:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.222 05:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.222 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.222 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.479 00:21:13.479 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.479 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.479 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.737 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.737 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.737 05:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.737 05:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.737 05:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.737 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.737 { 00:21:13.737 "cntlid": 107, 00:21:13.737 "qid": 0, 00:21:13.737 "state": "enabled", 00:21:13.737 "listen_address": { 00:21:13.737 "trtype": "TCP", 00:21:13.737 "adrfam": "IPv4", 00:21:13.737 "traddr": "10.0.0.2", 00:21:13.737 "trsvcid": "4420" 00:21:13.737 }, 00:21:13.737 "peer_address": { 00:21:13.737 "trtype": "TCP", 00:21:13.737 "adrfam": "IPv4", 00:21:13.737 "traddr": "10.0.0.1", 00:21:13.737 "trsvcid": "38766" 00:21:13.737 }, 00:21:13.737 "auth": { 00:21:13.737 "state": "completed", 00:21:13.737 "digest": "sha512", 00:21:13.737 "dhgroup": "ffdhe2048" 00:21:13.737 } 00:21:13.737 } 00:21:13.737 ]' 00:21:13.737 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.737 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.737 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.737 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:13.737 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.995 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.995 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.995 05:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.253 05:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 00:21:15.186 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.186 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.186 05:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.186 05:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.186 05:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.186 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.186 05:35:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.186 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.443 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:15.443 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.443 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.443 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:15.443 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:15.443 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.443 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.443 05:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.443 05:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.443 05:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.443 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.443 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.701 00:21:15.701 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.701 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.701 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.959 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.959 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.959 05:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.959 05:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.959 05:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.959 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.959 { 00:21:15.959 "cntlid": 109, 00:21:15.959 "qid": 0, 00:21:15.959 "state": "enabled", 00:21:15.959 "listen_address": { 00:21:15.959 "trtype": "TCP", 00:21:15.959 "adrfam": "IPv4", 00:21:15.959 "traddr": "10.0.0.2", 00:21:15.959 "trsvcid": "4420" 00:21:15.959 }, 00:21:15.959 "peer_address": { 00:21:15.959 "trtype": "TCP", 00:21:15.959 
"adrfam": "IPv4", 00:21:15.959 "traddr": "10.0.0.1", 00:21:15.959 "trsvcid": "38796" 00:21:15.959 }, 00:21:15.959 "auth": { 00:21:15.959 "state": "completed", 00:21:15.959 "digest": "sha512", 00:21:15.959 "dhgroup": "ffdhe2048" 00:21:15.959 } 00:21:15.959 } 00:21:15.959 ]' 00:21:15.959 05:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.959 05:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.959 05:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.959 05:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:15.959 05:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.217 05:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.217 05:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.217 05:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.474 05:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:21:17.407 05:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.407 05:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.407 05:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.407 05:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.407 05:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.407 05:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.407 05:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.407 05:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.664 05:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:17.664 05:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.664 05:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.664 05:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:17.664 05:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:17.664 05:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.664 05:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:17.664 05:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.664 05:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.664 05:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.664 05:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.664 05:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.924 00:21:17.924 05:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.924 05:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.924 05:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.214 05:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.214 05:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.214 05:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.214 05:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.214 05:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.214 05:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.214 { 00:21:18.214 "cntlid": 111, 00:21:18.214 "qid": 0, 00:21:18.214 "state": "enabled", 00:21:18.214 "listen_address": { 00:21:18.214 "trtype": "TCP", 00:21:18.214 "adrfam": "IPv4", 00:21:18.214 "traddr": "10.0.0.2", 00:21:18.214 "trsvcid": "4420" 00:21:18.214 }, 00:21:18.214 "peer_address": { 00:21:18.214 "trtype": "TCP", 00:21:18.214 "adrfam": "IPv4", 00:21:18.214 "traddr": "10.0.0.1", 00:21:18.214 "trsvcid": "38826" 00:21:18.214 }, 00:21:18.214 "auth": { 00:21:18.214 "state": "completed", 00:21:18.214 "digest": "sha512", 00:21:18.214 "dhgroup": "ffdhe2048" 00:21:18.214 } 00:21:18.214 } 00:21:18.214 ]' 00:21:18.214 05:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.214 05:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.214 05:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.472 05:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:18.473 05:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.473 05:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.473 05:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.473 05:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.730 05:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:21:19.664 05:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.664 05:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.664 05:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.664 05:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.664 05:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.664 05:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.664 05:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.664 05:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:19.664 05:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:19.922 05:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:19.922 05:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.922 05:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.922 05:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:19.922 05:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:19.922 05:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.922 05:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.922 05:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.922 05:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.922 05:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.922 05:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.922 05:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
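The block above is one pass of the connect_authenticate helper for the sha512 digest with the ffdhe3072 dhgroup and key id 0: the host application is restricted to a single digest/dhgroup, the host NQN is allowed on the subsystem with a DH-CHAP key pair, and a controller is attached with the matching keys. A condensed sketch of that sequence, using the helper names the trace itself shows (hostrpc wraps rpc.py -s /var/tmp/host.sock, rpc_cmd talks to the target); $hostnqn stands in for the long uuid-based NQN, and key0/ckey0 are key names set up earlier in the test, not literal secrets:

    # host side: restrict DH-CHAP negotiation to one digest and one dhgroup
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    # target side: allow this host NQN with a host key and a controller key
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller, authenticating with the same key pair
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0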
00:21:20.180 00:21:20.180 05:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.180 05:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.180 05:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.438 05:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.438 05:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.438 05:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.438 05:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.438 05:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.438 05:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.438 { 00:21:20.438 "cntlid": 113, 00:21:20.438 "qid": 0, 00:21:20.438 "state": "enabled", 00:21:20.438 "listen_address": { 00:21:20.438 "trtype": "TCP", 00:21:20.438 "adrfam": "IPv4", 00:21:20.438 "traddr": "10.0.0.2", 00:21:20.438 "trsvcid": "4420" 00:21:20.438 }, 00:21:20.438 "peer_address": { 00:21:20.438 "trtype": "TCP", 00:21:20.438 "adrfam": "IPv4", 00:21:20.438 "traddr": "10.0.0.1", 00:21:20.438 "trsvcid": "38850" 00:21:20.438 }, 00:21:20.438 "auth": { 00:21:20.438 "state": "completed", 00:21:20.438 "digest": "sha512", 00:21:20.438 "dhgroup": "ffdhe3072" 00:21:20.438 } 00:21:20.438 } 00:21:20.438 ]' 00:21:20.438 05:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.438 05:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.438 05:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.438 05:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:20.438 05:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.438 05:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.697 05:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.697 05:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.697 05:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:21:21.630 05:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.888 05:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.888 05:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
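Each attach is then verified and torn down the same way: the controller name is read back, the subsystem's qpair is queried, and its auth block must report the negotiated digest and dhgroup with state "completed". A sketch of those checks with the values expected for this ffdhe3072/key0 pass; the exact plumbing into jq is sketched here, not copied from the script:

    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'     # expect: nvme0
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -r '.[0].auth.digest'  <<< "$qpairs"                  # expect: sha512
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"                  # expect: ffdhe3072
    jq -r '.[0].auth.state'   <<< "$qpairs"                  # expect: completed
    hostrpc bdev_nvme_detach_controller nvme0

The same pass is then repeated through the kernel initiator (nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ..., followed by nvme disconnect), and the host entry is removed with nvmf_subsystem_remove_host before the next key id is tried.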
00:21:21.888 05:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.888 05:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.888 05:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.888 05:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:21.888 05:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.146 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:22.146 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.146 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:22.146 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:22.146 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:22.146 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.146 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.146 05:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.146 05:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.146 05:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.146 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.146 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.404 00:21:22.404 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.404 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.404 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.662 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.662 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.662 05:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.662 05:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.662 05:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.662 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.662 { 00:21:22.662 
"cntlid": 115, 00:21:22.662 "qid": 0, 00:21:22.662 "state": "enabled", 00:21:22.662 "listen_address": { 00:21:22.662 "trtype": "TCP", 00:21:22.662 "adrfam": "IPv4", 00:21:22.662 "traddr": "10.0.0.2", 00:21:22.662 "trsvcid": "4420" 00:21:22.662 }, 00:21:22.662 "peer_address": { 00:21:22.662 "trtype": "TCP", 00:21:22.662 "adrfam": "IPv4", 00:21:22.662 "traddr": "10.0.0.1", 00:21:22.662 "trsvcid": "41642" 00:21:22.662 }, 00:21:22.662 "auth": { 00:21:22.662 "state": "completed", 00:21:22.662 "digest": "sha512", 00:21:22.662 "dhgroup": "ffdhe3072" 00:21:22.662 } 00:21:22.662 } 00:21:22.662 ]' 00:21:22.662 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.662 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.662 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.662 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:22.662 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.662 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.662 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.662 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.919 05:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 00:21:23.851 05:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.851 05:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.851 05:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.851 05:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.851 05:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.851 05:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.851 05:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.851 05:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.160 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:24.160 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.160 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.160 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:21:24.160 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:24.160 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.160 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.160 05:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.160 05:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.160 05:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.160 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.160 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.418 00:21:24.418 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.418 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.418 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.676 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.676 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.676 05:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.676 05:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.676 05:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.676 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.676 { 00:21:24.676 "cntlid": 117, 00:21:24.676 "qid": 0, 00:21:24.676 "state": "enabled", 00:21:24.676 "listen_address": { 00:21:24.676 "trtype": "TCP", 00:21:24.676 "adrfam": "IPv4", 00:21:24.676 "traddr": "10.0.0.2", 00:21:24.676 "trsvcid": "4420" 00:21:24.676 }, 00:21:24.676 "peer_address": { 00:21:24.676 "trtype": "TCP", 00:21:24.676 "adrfam": "IPv4", 00:21:24.676 "traddr": "10.0.0.1", 00:21:24.676 "trsvcid": "41658" 00:21:24.676 }, 00:21:24.676 "auth": { 00:21:24.676 "state": "completed", 00:21:24.676 "digest": "sha512", 00:21:24.676 "dhgroup": "ffdhe3072" 00:21:24.676 } 00:21:24.676 } 00:21:24.676 ]' 00:21:24.676 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.933 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.933 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.933 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:24.933 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:21:24.933 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.933 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.933 05:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.190 05:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:21:26.121 05:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.121 05:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.121 05:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.121 05:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.121 05:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.121 05:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.121 05:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.121 05:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.378 05:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:26.378 05:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.378 05:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.378 05:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:26.378 05:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:26.378 05:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.378 05:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:26.378 05:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.378 05:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.378 05:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.378 05:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:26.378 05:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:26.943 00:21:26.943 05:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.943 05:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.943 05:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.943 05:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.943 05:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.943 05:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.943 05:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.200 05:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.200 05:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.200 { 00:21:27.200 "cntlid": 119, 00:21:27.200 "qid": 0, 00:21:27.200 "state": "enabled", 00:21:27.200 "listen_address": { 00:21:27.200 "trtype": "TCP", 00:21:27.200 "adrfam": "IPv4", 00:21:27.200 "traddr": "10.0.0.2", 00:21:27.200 "trsvcid": "4420" 00:21:27.200 }, 00:21:27.200 "peer_address": { 00:21:27.200 "trtype": "TCP", 00:21:27.200 "adrfam": "IPv4", 00:21:27.200 "traddr": "10.0.0.1", 00:21:27.200 "trsvcid": "41678" 00:21:27.200 }, 00:21:27.200 "auth": { 00:21:27.200 "state": "completed", 00:21:27.200 "digest": "sha512", 00:21:27.200 "dhgroup": "ffdhe3072" 00:21:27.200 } 00:21:27.200 } 00:21:27.200 ]' 00:21:27.200 05:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.200 05:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.200 05:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.200 05:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:27.200 05:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.200 05:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.200 05:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.200 05:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.457 05:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:21:28.390 05:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.390 05:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.390 05:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.390 05:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.390 05:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.390 05:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.390 05:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.390 05:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.390 05:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.648 05:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:28.648 05:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.648 05:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.648 05:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:28.648 05:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:28.648 05:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.648 05:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.648 05:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.648 05:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.648 05:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.648 05:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.648 05:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.213 00:21:29.213 05:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.213 05:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.213 05:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.213 05:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.472 05:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.472 05:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.472 05:35:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.472 05:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.472 05:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.472 { 00:21:29.472 "cntlid": 121, 00:21:29.472 "qid": 0, 00:21:29.472 "state": "enabled", 00:21:29.472 "listen_address": { 00:21:29.472 "trtype": "TCP", 00:21:29.472 "adrfam": "IPv4", 00:21:29.472 "traddr": "10.0.0.2", 00:21:29.472 "trsvcid": "4420" 00:21:29.472 }, 00:21:29.472 "peer_address": { 00:21:29.472 "trtype": "TCP", 00:21:29.472 "adrfam": "IPv4", 00:21:29.472 "traddr": "10.0.0.1", 00:21:29.472 "trsvcid": "41716" 00:21:29.472 }, 00:21:29.472 "auth": { 00:21:29.472 "state": "completed", 00:21:29.472 "digest": "sha512", 00:21:29.472 "dhgroup": "ffdhe4096" 00:21:29.472 } 00:21:29.472 } 00:21:29.472 ]' 00:21:29.472 05:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.472 05:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.472 05:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.472 05:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:29.472 05:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.472 05:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.472 05:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.472 05:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.730 05:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:21:30.663 05:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.664 05:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.664 05:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.664 05:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.664 05:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.664 05:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.664 05:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.664 05:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.922 05:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:21:30.922 05:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.922 05:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:30.922 05:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:30.922 05:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:30.922 05:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.922 05:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.922 05:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.922 05:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.922 05:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.922 05:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.922 05:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.179 00:21:31.437 05:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.437 05:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.437 05:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.437 05:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.437 05:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.437 05:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.437 05:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.712 05:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.712 05:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.712 { 00:21:31.712 "cntlid": 123, 00:21:31.712 "qid": 0, 00:21:31.712 "state": "enabled", 00:21:31.712 "listen_address": { 00:21:31.712 "trtype": "TCP", 00:21:31.712 "adrfam": "IPv4", 00:21:31.712 "traddr": "10.0.0.2", 00:21:31.712 "trsvcid": "4420" 00:21:31.712 }, 00:21:31.712 "peer_address": { 00:21:31.712 "trtype": "TCP", 00:21:31.712 "adrfam": "IPv4", 00:21:31.712 "traddr": "10.0.0.1", 00:21:31.712 "trsvcid": "51366" 00:21:31.712 }, 00:21:31.712 "auth": { 00:21:31.712 "state": "completed", 00:21:31.712 "digest": "sha512", 00:21:31.712 "dhgroup": "ffdhe4096" 00:21:31.712 } 00:21:31.712 } 00:21:31.712 ]' 00:21:31.712 05:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.712 05:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.712 05:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.712 05:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:31.712 05:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.712 05:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.712 05:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.712 05:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.981 05:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 00:21:32.913 05:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.913 05:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.913 05:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.913 05:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.913 05:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.913 05:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.913 05:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:32.913 05:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.171 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:33.171 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.171 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.171 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:33.171 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:33.171 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.171 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.171 05:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.171 05:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.171 05:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.171 
05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.171 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.429 00:21:33.429 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.429 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.429 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.686 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.686 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.686 05:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.686 05:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.686 05:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.686 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.686 { 00:21:33.686 "cntlid": 125, 00:21:33.686 "qid": 0, 00:21:33.686 "state": "enabled", 00:21:33.686 "listen_address": { 00:21:33.686 "trtype": "TCP", 00:21:33.686 "adrfam": "IPv4", 00:21:33.686 "traddr": "10.0.0.2", 00:21:33.686 "trsvcid": "4420" 00:21:33.686 }, 00:21:33.686 "peer_address": { 00:21:33.686 "trtype": "TCP", 00:21:33.686 "adrfam": "IPv4", 00:21:33.686 "traddr": "10.0.0.1", 00:21:33.686 "trsvcid": "51388" 00:21:33.686 }, 00:21:33.686 "auth": { 00:21:33.686 "state": "completed", 00:21:33.686 "digest": "sha512", 00:21:33.686 "dhgroup": "ffdhe4096" 00:21:33.686 } 00:21:33.686 } 00:21:33.686 ]' 00:21:33.686 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.944 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.944 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.944 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:33.944 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.944 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.944 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.944 05:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.200 05:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:21:35.131 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.131 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.131 05:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.131 05:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.131 05:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.131 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.131 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.131 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.387 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:35.387 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.387 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.387 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:35.387 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:35.388 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.388 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:35.388 05:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.388 05:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.388 05:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.388 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:35.388 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:35.645 00:21:35.645 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.645 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.645 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.902 05:35:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.902 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.902 05:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.902 05:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.902 05:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.902 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.902 { 00:21:35.902 "cntlid": 127, 00:21:35.902 "qid": 0, 00:21:35.902 "state": "enabled", 00:21:35.902 "listen_address": { 00:21:35.902 "trtype": "TCP", 00:21:35.902 "adrfam": "IPv4", 00:21:35.902 "traddr": "10.0.0.2", 00:21:35.902 "trsvcid": "4420" 00:21:35.902 }, 00:21:35.902 "peer_address": { 00:21:35.902 "trtype": "TCP", 00:21:35.902 "adrfam": "IPv4", 00:21:35.902 "traddr": "10.0.0.1", 00:21:35.902 "trsvcid": "51416" 00:21:35.902 }, 00:21:35.902 "auth": { 00:21:35.902 "state": "completed", 00:21:35.902 "digest": "sha512", 00:21:35.902 "dhgroup": "ffdhe4096" 00:21:35.902 } 00:21:35.902 } 00:21:35.902 ]' 00:21:35.902 05:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.160 05:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.160 05:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.160 05:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:36.160 05:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.160 05:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.160 05:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.160 05:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.418 05:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:21:37.351 05:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.351 05:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.351 05:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.351 05:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.351 05:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.351 05:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:37.351 05:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.351 05:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
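At this point the outer loops in the trace (target/auth.sh@92-96) move on to the ffdhe6144 group. The structure visible in this portion of the log is the sha512 digest combined with ffdhe2048, ffdhe3072, ffdhe4096 and ffdhe6144, each exercised with key ids 0 through 3, roughly:

    # loop structure as it appears in this part of the log; the script's full
    # digest and dhgroup lists may be longer than what is shown here
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
        for keyid in 0 1 2 3; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done

Note that for key id 3 the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion drops the controller-key argument, so those passes run with a host key only, without the controller-side key.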
00:21:37.351 05:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:37.608 05:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:37.608 05:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.608 05:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:37.608 05:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:37.608 05:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:37.608 05:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.608 05:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.608 05:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.608 05:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.608 05:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.608 05:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.608 05:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.173 00:21:38.173 05:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.173 05:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.173 05:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.431 05:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.431 05:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.431 05:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.431 05:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.431 05:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.431 05:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.431 { 00:21:38.431 "cntlid": 129, 00:21:38.431 "qid": 0, 00:21:38.431 "state": "enabled", 00:21:38.431 "listen_address": { 00:21:38.431 "trtype": "TCP", 00:21:38.431 "adrfam": "IPv4", 00:21:38.431 "traddr": "10.0.0.2", 00:21:38.431 "trsvcid": "4420" 00:21:38.431 }, 00:21:38.431 "peer_address": { 00:21:38.431 "trtype": "TCP", 00:21:38.431 "adrfam": "IPv4", 00:21:38.431 "traddr": "10.0.0.1", 00:21:38.431 "trsvcid": "51452" 00:21:38.431 }, 00:21:38.431 "auth": { 
00:21:38.431 "state": "completed", 00:21:38.431 "digest": "sha512", 00:21:38.431 "dhgroup": "ffdhe6144" 00:21:38.431 } 00:21:38.431 } 00:21:38.431 ]' 00:21:38.431 05:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.431 05:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.431 05:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.689 05:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:38.689 05:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.689 05:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.689 05:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.689 05:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.945 05:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:21:39.877 05:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.877 05:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.877 05:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.877 05:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.877 05:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.877 05:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.877 05:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:39.877 05:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.135 05:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:40.135 05:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.135 05:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.135 05:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:40.135 05:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:40.135 05:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.135 05:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.135 05:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.135 05:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.135 05:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.135 05:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.135 05:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.700 00:21:40.700 05:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.700 05:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.700 05:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.266 05:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.266 05:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.266 05:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.266 05:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.266 05:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.266 05:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.266 { 00:21:41.266 "cntlid": 131, 00:21:41.266 "qid": 0, 00:21:41.266 "state": "enabled", 00:21:41.266 "listen_address": { 00:21:41.266 "trtype": "TCP", 00:21:41.266 "adrfam": "IPv4", 00:21:41.266 "traddr": "10.0.0.2", 00:21:41.266 "trsvcid": "4420" 00:21:41.266 }, 00:21:41.266 "peer_address": { 00:21:41.266 "trtype": "TCP", 00:21:41.266 "adrfam": "IPv4", 00:21:41.266 "traddr": "10.0.0.1", 00:21:41.266 "trsvcid": "51478" 00:21:41.266 }, 00:21:41.266 "auth": { 00:21:41.266 "state": "completed", 00:21:41.266 "digest": "sha512", 00:21:41.266 "dhgroup": "ffdhe6144" 00:21:41.266 } 00:21:41.266 } 00:21:41.266 ]' 00:21:41.266 05:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.266 05:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.266 05:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.266 05:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:41.266 05:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.266 05:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.266 05:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.266 05:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.524 05:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 00:21:42.459 05:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.459 05:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.459 05:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.459 05:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.459 05:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.459 05:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.459 05:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.459 05:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.717 05:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:42.717 05:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.717 05:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.717 05:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:42.717 05:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:42.717 05:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.717 05:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.717 05:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.717 05:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.717 05:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.717 05:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.717 05:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:43.283 00:21:43.283 05:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.283 05:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.283 05:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.541 05:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.541 05:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.541 05:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.541 05:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.541 05:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.541 05:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.541 { 00:21:43.541 "cntlid": 133, 00:21:43.541 "qid": 0, 00:21:43.541 "state": "enabled", 00:21:43.541 "listen_address": { 00:21:43.541 "trtype": "TCP", 00:21:43.541 "adrfam": "IPv4", 00:21:43.541 "traddr": "10.0.0.2", 00:21:43.541 "trsvcid": "4420" 00:21:43.541 }, 00:21:43.541 "peer_address": { 00:21:43.541 "trtype": "TCP", 00:21:43.541 "adrfam": "IPv4", 00:21:43.541 "traddr": "10.0.0.1", 00:21:43.541 "trsvcid": "52978" 00:21:43.541 }, 00:21:43.541 "auth": { 00:21:43.541 "state": "completed", 00:21:43.541 "digest": "sha512", 00:21:43.541 "dhgroup": "ffdhe6144" 00:21:43.541 } 00:21:43.541 } 00:21:43.541 ]' 00:21:43.541 05:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.541 05:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.541 05:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.541 05:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:43.799 05:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.799 05:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.799 05:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.799 05:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.057 05:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:21:44.990 05:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.990 05:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.990 05:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.990 05:35:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.990 05:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.990 05:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.990 05:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.990 05:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:45.248 05:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:45.248 05:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.248 05:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.248 05:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:45.248 05:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:45.248 05:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.248 05:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:45.248 05:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.248 05:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.248 05:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.248 05:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:45.248 05:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:45.846 00:21:45.846 05:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.846 05:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.846 05:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.104 05:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.104 05:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.104 05:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.104 05:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.104 05:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.104 05:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.104 { 00:21:46.104 "cntlid": 135, 00:21:46.104 "qid": 0, 00:21:46.104 "state": "enabled", 00:21:46.104 "listen_address": { 
00:21:46.104 "trtype": "TCP", 00:21:46.104 "adrfam": "IPv4", 00:21:46.104 "traddr": "10.0.0.2", 00:21:46.104 "trsvcid": "4420" 00:21:46.104 }, 00:21:46.104 "peer_address": { 00:21:46.104 "trtype": "TCP", 00:21:46.104 "adrfam": "IPv4", 00:21:46.104 "traddr": "10.0.0.1", 00:21:46.104 "trsvcid": "53008" 00:21:46.104 }, 00:21:46.104 "auth": { 00:21:46.104 "state": "completed", 00:21:46.104 "digest": "sha512", 00:21:46.104 "dhgroup": "ffdhe6144" 00:21:46.104 } 00:21:46.104 } 00:21:46.104 ]' 00:21:46.104 05:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.104 05:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.104 05:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.104 05:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:46.104 05:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.104 05:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.104 05:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.104 05:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.361 05:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:21:47.292 05:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.292 05:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.292 05:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.292 05:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.292 05:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.292 05:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.292 05:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.292 05:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:47.292 05:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:47.550 05:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:47.550 05:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.550 05:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.550 05:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:47.550 05:35:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:21:47.550 05:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.550 05:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.550 05:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.550 05:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.550 05:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.550 05:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.550 05:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.484 00:21:48.484 05:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.484 05:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.484 05:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.743 05:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.743 05:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.743 05:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.743 05:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.001 05:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.001 05:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.001 { 00:21:49.001 "cntlid": 137, 00:21:49.001 "qid": 0, 00:21:49.001 "state": "enabled", 00:21:49.001 "listen_address": { 00:21:49.001 "trtype": "TCP", 00:21:49.001 "adrfam": "IPv4", 00:21:49.001 "traddr": "10.0.0.2", 00:21:49.001 "trsvcid": "4420" 00:21:49.001 }, 00:21:49.001 "peer_address": { 00:21:49.001 "trtype": "TCP", 00:21:49.001 "adrfam": "IPv4", 00:21:49.001 "traddr": "10.0.0.1", 00:21:49.001 "trsvcid": "53040" 00:21:49.001 }, 00:21:49.001 "auth": { 00:21:49.001 "state": "completed", 00:21:49.001 "digest": "sha512", 00:21:49.001 "dhgroup": "ffdhe8192" 00:21:49.001 } 00:21:49.001 } 00:21:49.001 ]' 00:21:49.001 05:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.001 05:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.001 05:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.001 05:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:49.001 05:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.001 05:35:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.001 05:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.001 05:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.259 05:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:21:50.192 05:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.192 05:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.192 05:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.192 05:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.192 05:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.192 05:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.192 05:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:50.192 05:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:50.757 05:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:50.757 05:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.757 05:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:50.757 05:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:50.757 05:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:50.757 05:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.757 05:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.757 05:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.757 05:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.757 05:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.757 05:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.757 05:35:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.690 00:21:51.690 05:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.690 05:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.690 05:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.690 05:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.690 05:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.690 05:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.690 05:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.690 05:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.691 05:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:51.691 { 00:21:51.691 "cntlid": 139, 00:21:51.691 "qid": 0, 00:21:51.691 "state": "enabled", 00:21:51.691 "listen_address": { 00:21:51.691 "trtype": "TCP", 00:21:51.691 "adrfam": "IPv4", 00:21:51.691 "traddr": "10.0.0.2", 00:21:51.691 "trsvcid": "4420" 00:21:51.691 }, 00:21:51.691 "peer_address": { 00:21:51.691 "trtype": "TCP", 00:21:51.691 "adrfam": "IPv4", 00:21:51.691 "traddr": "10.0.0.1", 00:21:51.691 "trsvcid": "43232" 00:21:51.691 }, 00:21:51.691 "auth": { 00:21:51.691 "state": "completed", 00:21:51.691 "digest": "sha512", 00:21:51.691 "dhgroup": "ffdhe8192" 00:21:51.691 } 00:21:51.691 } 00:21:51.691 ]' 00:21:51.691 05:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.691 05:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.691 05:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.691 05:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:51.691 05:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:51.948 05:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.948 05:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.948 05:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.206 05:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTU5MWQ5MzQ3ZDkzODE0OWM4Y2ZmNmI3OTA1NGNhZDGUMbrn: --dhchap-ctrl-secret DHHC-1:02:MTRlZDNhYjFjYWU4MGFhODAzNmQzM2YxYzc3ZmMzMGVjMDQ2MWUzNGZmZjU5YjU4fdti4A==: 00:21:53.139 05:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:21:53.139 05:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.139 05:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.139 05:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.139 05:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.139 05:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.139 05:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.139 05:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.397 05:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:53.397 05:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.397 05:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:53.397 05:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:53.397 05:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:53.397 05:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.397 05:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.397 05:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.397 05:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.397 05:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.397 05:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.397 05:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.330 00:21:54.330 05:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.330 05:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.330 05:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.330 05:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.330 05:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.330 05:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:54.330 05:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.587 05:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.587 05:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.587 { 00:21:54.587 "cntlid": 141, 00:21:54.587 "qid": 0, 00:21:54.587 "state": "enabled", 00:21:54.587 "listen_address": { 00:21:54.587 "trtype": "TCP", 00:21:54.587 "adrfam": "IPv4", 00:21:54.587 "traddr": "10.0.0.2", 00:21:54.587 "trsvcid": "4420" 00:21:54.587 }, 00:21:54.587 "peer_address": { 00:21:54.587 "trtype": "TCP", 00:21:54.587 "adrfam": "IPv4", 00:21:54.587 "traddr": "10.0.0.1", 00:21:54.587 "trsvcid": "43258" 00:21:54.587 }, 00:21:54.587 "auth": { 00:21:54.587 "state": "completed", 00:21:54.587 "digest": "sha512", 00:21:54.587 "dhgroup": "ffdhe8192" 00:21:54.587 } 00:21:54.587 } 00:21:54.587 ]' 00:21:54.587 05:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.587 05:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.587 05:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.587 05:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:54.587 05:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.587 05:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.587 05:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.587 05:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.844 05:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MTQxY2I0NDY0NGE5NjM1YmNhMWQ3MmM2NGY0ZjVhYWE5Nzk5YmY3YzE0ZmRkN2RmCb/FQQ==: --dhchap-ctrl-secret DHHC-1:01:NzExYWE4OWRkYzM0N2I1YTlmMTA4ZDE5OWI3NzU1OWWJ7U2z: 00:21:55.776 05:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.776 05:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.776 05:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.776 05:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.776 05:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.776 05:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.776 05:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:55.776 05:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.034 05:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:21:56.034 05:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.034 05:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:56.034 05:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:56.034 05:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:56.034 05:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.034 05:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:56.034 05:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.034 05:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.034 05:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.034 05:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.034 05:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.964 00:21:56.964 05:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.964 05:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.964 05:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.221 05:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.221 05:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.221 05:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.221 05:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.221 05:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.221 05:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.221 { 00:21:57.221 "cntlid": 143, 00:21:57.221 "qid": 0, 00:21:57.221 "state": "enabled", 00:21:57.221 "listen_address": { 00:21:57.221 "trtype": "TCP", 00:21:57.221 "adrfam": "IPv4", 00:21:57.221 "traddr": "10.0.0.2", 00:21:57.221 "trsvcid": "4420" 00:21:57.221 }, 00:21:57.221 "peer_address": { 00:21:57.221 "trtype": "TCP", 00:21:57.221 "adrfam": "IPv4", 00:21:57.221 "traddr": "10.0.0.1", 00:21:57.221 "trsvcid": "43290" 00:21:57.221 }, 00:21:57.221 "auth": { 00:21:57.221 "state": "completed", 00:21:57.221 "digest": "sha512", 00:21:57.221 "dhgroup": "ffdhe8192" 00:21:57.221 } 00:21:57.221 } 00:21:57.221 ]' 00:21:57.221 05:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.221 05:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.222 05:36:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.222 05:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:57.222 05:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.222 05:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.222 05:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.222 05:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.479 05:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:21:58.411 05:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.411 05:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.411 05:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.411 05:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.411 05:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.411 05:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:58.411 05:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:58.411 05:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:58.411 05:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.411 05:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.411 05:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.977 05:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:58.977 05:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.977 05:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.977 05:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:58.977 05:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:58.977 05:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.977 05:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:21:58.977 05:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.977 05:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.977 05:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.977 05:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.977 05:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.916 00:21:59.916 05:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.916 05:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.916 05:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.916 05:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.916 05:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.916 05:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.916 05:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.916 05:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.916 05:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.916 { 00:21:59.916 "cntlid": 145, 00:21:59.916 "qid": 0, 00:21:59.916 "state": "enabled", 00:21:59.916 "listen_address": { 00:21:59.916 "trtype": "TCP", 00:21:59.916 "adrfam": "IPv4", 00:21:59.916 "traddr": "10.0.0.2", 00:21:59.916 "trsvcid": "4420" 00:21:59.916 }, 00:21:59.916 "peer_address": { 00:21:59.916 "trtype": "TCP", 00:21:59.916 "adrfam": "IPv4", 00:21:59.916 "traddr": "10.0.0.1", 00:21:59.916 "trsvcid": "43314" 00:21:59.916 }, 00:21:59.916 "auth": { 00:21:59.916 "state": "completed", 00:21:59.916 "digest": "sha512", 00:21:59.916 "dhgroup": "ffdhe8192" 00:21:59.916 } 00:21:59.916 } 00:21:59.916 ]' 00:21:59.916 05:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.916 05:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.916 05:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.190 05:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.190 05:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.190 05:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.190 05:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.190 05:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.447 
05:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjRkZDgzN2I5MGFkYTM4ZGI1NDBmM2M3NTQ5NDkzOWY2NWJjODNlMDJlNzY5YzVid1MmQg==: --dhchap-ctrl-secret DHHC-1:03:YzMyM2VmMjg1ZmJkNWFkMzA3ZjJlNmY3ZjIwMGU1MWJiNzM5MjAzZTgxYTkzZjc2YTY1YjAwNmE4NTYyMjAwYkNbsVg=: 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:01.377 05:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:02.307 request: 00:22:02.307 { 00:22:02.307 "name": "nvme0", 00:22:02.307 "trtype": "tcp", 00:22:02.307 "traddr": 
"10.0.0.2", 00:22:02.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:02.307 "adrfam": "ipv4", 00:22:02.307 "trsvcid": "4420", 00:22:02.307 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:02.307 "dhchap_key": "key2", 00:22:02.307 "method": "bdev_nvme_attach_controller", 00:22:02.307 "req_id": 1 00:22:02.307 } 00:22:02.307 Got JSON-RPC error response 00:22:02.307 response: 00:22:02.307 { 00:22:02.307 "code": -5, 00:22:02.307 "message": "Input/output error" 00:22:02.307 } 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:02.307 05:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:03.238 request: 00:22:03.238 { 00:22:03.238 "name": "nvme0", 00:22:03.238 "trtype": "tcp", 00:22:03.238 "traddr": "10.0.0.2", 00:22:03.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:03.238 "adrfam": "ipv4", 00:22:03.238 "trsvcid": "4420", 00:22:03.238 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:03.238 "dhchap_key": "key1", 00:22:03.238 "dhchap_ctrlr_key": "ckey2", 00:22:03.238 "method": "bdev_nvme_attach_controller", 00:22:03.238 "req_id": 1 00:22:03.238 } 00:22:03.238 Got JSON-RPC error response 00:22:03.238 response: 00:22:03.238 { 00:22:03.238 "code": -5, 00:22:03.238 "message": "Input/output error" 00:22:03.238 } 00:22:03.238 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:03.238 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:03.238 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:03.238 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:03.238 05:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.238 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.238 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.238 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.238 05:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:03.238 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.239 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.239 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.239 05:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.239 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:03.239 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.239 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:03.239 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.239 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:03.239 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.239 05:36:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.239 05:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.173 request: 00:22:04.173 { 00:22:04.173 "name": "nvme0", 00:22:04.173 "trtype": "tcp", 00:22:04.173 "traddr": "10.0.0.2", 00:22:04.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:04.173 "adrfam": "ipv4", 00:22:04.173 "trsvcid": "4420", 00:22:04.173 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:04.173 "dhchap_key": "key1", 00:22:04.173 "dhchap_ctrlr_key": "ckey1", 00:22:04.173 "method": "bdev_nvme_attach_controller", 00:22:04.173 "req_id": 1 00:22:04.173 } 00:22:04.173 Got JSON-RPC error response 00:22:04.173 response: 00:22:04.173 { 00:22:04.173 "code": -5, 00:22:04.173 "message": "Input/output error" 00:22:04.173 } 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3243755 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3243755 ']' 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3243755 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3243755 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3243755' 00:22:04.173 killing process with pid 3243755 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3243755 00:22:04.173 05:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3243755 00:22:04.173 05:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:04.173 05:36:11 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:04.173 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:04.173 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.173 05:36:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3266373 00:22:04.173 05:36:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:04.173 05:36:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3266373 00:22:04.173 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3266373 ']' 00:22:04.173 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.173 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:04.173 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.173 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:04.173 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.432 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:04.432 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:04.432 05:36:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:04.432 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.432 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.432 05:36:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.432 05:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:04.432 05:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3266373 00:22:04.432 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3266373 ']' 00:22:04.432 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.432 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:04.432 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:04.432 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:04.432 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.690 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:04.690 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:04.690 05:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:04.690 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.690 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.948 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.948 05:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:04.948 05:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.948 05:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:04.948 05:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:04.948 05:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:04.948 05:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.948 05:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:04.948 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.948 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.948 05:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.948 05:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:04.948 05:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.880 00:22:05.880 05:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.880 05:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.880 05:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.138 05:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.138 05:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.138 05:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.138 05:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.138 05:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.138 05:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.138 { 00:22:06.138 
"cntlid": 1, 00:22:06.138 "qid": 0, 00:22:06.138 "state": "enabled", 00:22:06.138 "listen_address": { 00:22:06.138 "trtype": "TCP", 00:22:06.138 "adrfam": "IPv4", 00:22:06.138 "traddr": "10.0.0.2", 00:22:06.138 "trsvcid": "4420" 00:22:06.138 }, 00:22:06.138 "peer_address": { 00:22:06.138 "trtype": "TCP", 00:22:06.138 "adrfam": "IPv4", 00:22:06.138 "traddr": "10.0.0.1", 00:22:06.138 "trsvcid": "34796" 00:22:06.138 }, 00:22:06.138 "auth": { 00:22:06.138 "state": "completed", 00:22:06.138 "digest": "sha512", 00:22:06.138 "dhgroup": "ffdhe8192" 00:22:06.138 } 00:22:06.138 } 00:22:06.138 ]' 00:22:06.138 05:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.138 05:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.138 05:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.138 05:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:06.138 05:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.138 05:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.138 05:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.138 05:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.395 05:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWVkNTBkYTBkNTY0YmYzOWUzYjg3Y2RhYTFmOGEyYzI0MGYwNTBhYTU5NTgyODRkOWNiNGQ3MGQ1ODY4MDFkNYf5+BY=: 00:22:07.325 05:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.325 05:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.325 05:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.325 05:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.325 05:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.326 05:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:07.326 05:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.326 05:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.326 05:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.326 05:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:07.326 05:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:07.582 05:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.582 05:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:07.582 05:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.582 05:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:07.583 05:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.583 05:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:07.583 05:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.583 05:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.583 05:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.840 request: 00:22:07.840 { 00:22:07.840 "name": "nvme0", 00:22:07.840 "trtype": "tcp", 00:22:07.840 "traddr": "10.0.0.2", 00:22:07.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:07.840 "adrfam": "ipv4", 00:22:07.840 "trsvcid": "4420", 00:22:07.840 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:07.840 "dhchap_key": "key3", 00:22:07.840 "method": "bdev_nvme_attach_controller", 00:22:07.840 "req_id": 1 00:22:07.840 } 00:22:07.840 Got JSON-RPC error response 00:22:07.840 response: 00:22:07.840 { 00:22:07.840 "code": -5, 00:22:07.840 "message": "Input/output error" 00:22:07.840 } 00:22:07.840 05:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:07.840 05:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:07.840 05:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:07.840 05:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:07.840 05:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:07.840 05:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:07.840 05:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:07.840 05:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:08.098 05:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.098 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:08.098 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.098 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:08.098 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.098 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:08.098 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.098 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.098 05:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.356 request: 00:22:08.356 { 00:22:08.356 "name": "nvme0", 00:22:08.356 "trtype": "tcp", 00:22:08.356 "traddr": "10.0.0.2", 00:22:08.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:08.356 "adrfam": "ipv4", 00:22:08.356 "trsvcid": "4420", 00:22:08.356 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:08.356 "dhchap_key": "key3", 00:22:08.356 "method": "bdev_nvme_attach_controller", 00:22:08.356 "req_id": 1 00:22:08.356 } 00:22:08.356 Got JSON-RPC error response 00:22:08.356 response: 00:22:08.356 { 00:22:08.356 "code": -5, 00:22:08.356 "message": "Input/output error" 00:22:08.356 } 00:22:08.356 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:08.356 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:08.356 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:08.356 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:08.356 05:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:08.356 05:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:08.356 05:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:08.356 05:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:08.356 05:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:08.356 05:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:08.614 05:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.614 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.614 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.614 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.614 05:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.614 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.614 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.614 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.614 05:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:08.614 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:08.614 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:08.614 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:08.614 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.614 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:08.614 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.614 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:08.614 05:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:08.872 request: 00:22:08.872 { 00:22:08.872 "name": "nvme0", 00:22:08.872 "trtype": "tcp", 00:22:08.872 "traddr": "10.0.0.2", 00:22:08.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:08.872 "adrfam": "ipv4", 00:22:08.872 "trsvcid": "4420", 00:22:08.872 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:08.872 "dhchap_key": "key0", 00:22:08.872 "dhchap_ctrlr_key": "key1", 00:22:08.872 "method": "bdev_nvme_attach_controller", 00:22:08.872 "req_id": 1 00:22:08.872 } 00:22:08.872 Got JSON-RPC error response 00:22:08.872 response: 00:22:08.872 { 00:22:08.872 "code": -5, 00:22:08.872 "message": "Input/output error" 00:22:08.872 } 00:22:08.872 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:08.872 05:36:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:08.872 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:08.872 05:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:08.872 05:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:08.872 05:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:09.130 00:22:09.130 05:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:09.130 05:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.130 05:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:09.388 05:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.388 05:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.388 05:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.646 05:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:09.646 05:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:09.646 05:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3243780 00:22:09.646 05:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3243780 ']' 00:22:09.646 05:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3243780 00:22:09.646 05:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:09.646 05:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:09.646 05:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3243780 00:22:09.646 05:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:09.646 05:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:09.646 05:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3243780' 00:22:09.646 killing process with pid 3243780 00:22:09.646 05:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3243780 00:22:09.646 05:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3243780 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:10.212 rmmod nvme_tcp 00:22:10.212 rmmod nvme_fabrics 00:22:10.212 rmmod nvme_keyring 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3266373 ']' 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3266373 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3266373 ']' 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3266373 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3266373 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3266373' 00:22:10.212 killing process with pid 3266373 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3266373 00:22:10.212 05:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3266373 00:22:10.470 05:36:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:10.470 05:36:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:10.470 05:36:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:10.470 05:36:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:10.470 05:36:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:10.470 05:36:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.470 05:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:10.470 05:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.376 05:36:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:12.376 05:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.1Yp /tmp/spdk.key-sha256.MAD /tmp/spdk.key-sha384.kJz /tmp/spdk.key-sha512.xax /tmp/spdk.key-sha512.b51 /tmp/spdk.key-sha384.KyQ /tmp/spdk.key-sha256.MAb '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:12.376 00:22:12.376 real 3m9.248s 00:22:12.376 user 7m19.640s 00:22:12.376 sys 0m25.671s 00:22:12.376 05:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:12.376 05:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.376 ************************************ 00:22:12.376 END TEST 
nvmf_auth_target 00:22:12.376 ************************************ 00:22:12.634 05:36:19 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:12.634 05:36:19 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:12.634 05:36:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:12.634 05:36:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:12.634 05:36:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:12.634 ************************************ 00:22:12.634 START TEST nvmf_bdevio_no_huge 00:22:12.634 ************************************ 00:22:12.634 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:12.634 * Looking for test storage... 00:22:12.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:12.635 05:36:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:14.538 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:14.538 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.538 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.539 05:36:21 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:14.539 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:14.539 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.539 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:14.859 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:14.859 
05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.859 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:14.859 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.859 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.859 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.859 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:14.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:22:14.859 00:22:14.859 --- 10.0.0.2 ping statistics --- 00:22:14.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.859 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:22:14.859 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:22:14.859 00:22:14.859 --- 10.0.0.1 ping statistics --- 00:22:14.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.859 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:22:14.859 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3269017 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3269017 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 3269017 ']' 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:14.860 05:36:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.860 [2024-07-14 05:36:21.801789] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:14.860 [2024-07-14 05:36:21.801874] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:14.860 [2024-07-14 05:36:21.872010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.119 [2024-07-14 05:36:21.953838] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.119 [2024-07-14 05:36:21.953909] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.119 [2024-07-14 05:36:21.953932] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.119 [2024-07-14 05:36:21.953943] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.119 [2024-07-14 05:36:21.953953] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.119 [2024-07-14 05:36:21.954013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:15.119 [2024-07-14 05:36:21.954073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:15.119 [2024-07-14 05:36:21.954147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:15.119 [2024-07-14 05:36:21.954150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.119 [2024-07-14 05:36:22.067663] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 
00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.119 Malloc0 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.119 [2024-07-14 05:36:22.105486] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.119 { 00:22:15.119 "params": { 00:22:15.119 "name": "Nvme$subsystem", 00:22:15.119 "trtype": "$TEST_TRANSPORT", 00:22:15.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.119 "adrfam": "ipv4", 00:22:15.119 "trsvcid": "$NVMF_PORT", 00:22:15.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.119 "hdgst": ${hdgst:-false}, 00:22:15.119 "ddgst": ${ddgst:-false} 00:22:15.119 }, 00:22:15.119 "method": "bdev_nvme_attach_controller" 00:22:15.119 } 00:22:15.119 EOF 00:22:15.119 )") 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:15.119 05:36:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:15.119 "params": { 00:22:15.119 "name": "Nvme1", 00:22:15.119 "trtype": "tcp", 00:22:15.119 "traddr": "10.0.0.2", 00:22:15.119 "adrfam": "ipv4", 00:22:15.119 "trsvcid": "4420", 00:22:15.119 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.119 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:15.119 "hdgst": false, 00:22:15.119 "ddgst": false 00:22:15.119 }, 00:22:15.119 "method": "bdev_nvme_attach_controller" 00:22:15.119 }' 00:22:15.119 [2024-07-14 05:36:22.149983] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:15.119 [2024-07-14 05:36:22.150076] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3269161 ] 00:22:15.119 [2024-07-14 05:36:22.212254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:15.377 [2024-07-14 05:36:22.296287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.377 [2024-07-14 05:36:22.296339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.377 [2024-07-14 05:36:22.296342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.635 I/O targets: 00:22:15.635 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:15.635 00:22:15.635 00:22:15.635 CUnit - A unit testing framework for C - Version 2.1-3 00:22:15.635 http://cunit.sourceforge.net/ 00:22:15.635 00:22:15.635 00:22:15.635 Suite: bdevio tests on: Nvme1n1 00:22:15.635 Test: blockdev write read block ...passed 00:22:15.635 Test: blockdev write zeroes read block ...passed 00:22:15.635 Test: blockdev write zeroes read no split ...passed 00:22:15.635 Test: blockdev write zeroes read split ...passed 00:22:15.635 Test: blockdev write zeroes read split partial ...passed 00:22:15.635 Test: blockdev reset ...[2024-07-14 05:36:22.710413] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:15.635 [2024-07-14 05:36:22.710529] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e25a00 (9): Bad file descriptor 00:22:15.635 [2024-07-14 05:36:22.730144] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:15.635 passed 00:22:15.635 Test: blockdev write read 8 blocks ...passed 00:22:15.635 Test: blockdev write read size > 128k ...passed 00:22:15.635 Test: blockdev write read invalid size ...passed 00:22:15.892 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:15.892 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:15.892 Test: blockdev write read max offset ...passed 00:22:15.892 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:15.892 Test: blockdev writev readv 8 blocks ...passed 00:22:15.892 Test: blockdev writev readv 30 x 1block ...passed 00:22:15.892 Test: blockdev writev readv block ...passed 00:22:15.892 Test: blockdev writev readv size > 128k ...passed 00:22:15.892 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:15.892 Test: blockdev comparev and writev ...[2024-07-14 05:36:22.989949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:15.892 [2024-07-14 05:36:22.989986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.892 [2024-07-14 05:36:22.990010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:15.892 [2024-07-14 05:36:22.990027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:15.892 [2024-07-14 05:36:22.990402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:15.892 [2024-07-14 05:36:22.990426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:15.892 [2024-07-14 05:36:22.990448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:15.892 [2024-07-14 05:36:22.990464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:15.892 [2024-07-14 05:36:22.990837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:15.892 [2024-07-14 05:36:22.990863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:15.892 [2024-07-14 05:36:22.990894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:15.892 [2024-07-14 05:36:22.990911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:15.892 [2024-07-14 05:36:22.991285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:15.892 [2024-07-14 05:36:22.991310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:15.892 [2024-07-14 05:36:22.991332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:15.892 [2024-07-14 05:36:22.991348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:16.150 passed 00:22:16.151 Test: blockdev nvme passthru rw ...passed 00:22:16.151 Test: blockdev nvme passthru vendor specific ...[2024-07-14 05:36:23.075233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.151 [2024-07-14 05:36:23.075260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:16.151 [2024-07-14 05:36:23.075469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.151 [2024-07-14 05:36:23.075493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:16.151 [2024-07-14 05:36:23.075705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.151 [2024-07-14 05:36:23.075728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:16.151 [2024-07-14 05:36:23.075946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.151 [2024-07-14 05:36:23.075982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:16.151 passed 00:22:16.151 Test: blockdev nvme admin passthru ...passed 00:22:16.151 Test: blockdev copy ...passed 00:22:16.151 00:22:16.151 Run Summary: Type Total Ran Passed Failed Inactive 00:22:16.151 suites 1 1 n/a 0 0 00:22:16.151 tests 23 23 23 0 0 00:22:16.151 asserts 152 152 152 0 n/a 00:22:16.151 00:22:16.151 Elapsed time = 1.282 seconds 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:16.408 rmmod nvme_tcp 00:22:16.408 rmmod nvme_fabrics 00:22:16.408 rmmod nvme_keyring 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3269017 ']' 00:22:16.408 05:36:23 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3269017 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 3269017 ']' 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 3269017 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:16.408 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3269017 00:22:16.665 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:22:16.665 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:22:16.665 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3269017' 00:22:16.665 killing process with pid 3269017 00:22:16.665 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 3269017 00:22:16.665 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 3269017 00:22:16.923 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:16.923 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:16.923 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:16.923 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:16.923 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:16.923 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.923 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.923 05:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.826 05:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:19.084 00:22:19.084 real 0m6.418s 00:22:19.084 user 0m10.299s 00:22:19.084 sys 0m2.543s 00:22:19.084 05:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:19.084 05:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.084 ************************************ 00:22:19.084 END TEST nvmf_bdevio_no_huge 00:22:19.084 ************************************ 00:22:19.084 05:36:25 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:19.084 05:36:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:19.084 05:36:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:19.084 05:36:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:19.084 ************************************ 00:22:19.084 START TEST nvmf_tls 00:22:19.084 ************************************ 00:22:19.084 05:36:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:19.084 * Looking for test storage... 
00:22:19.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.084 05:36:26 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:19.085 05:36:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.995 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:20.996 
05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:20.996 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:20.996 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:20.996 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:20.996 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:20.996 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:21.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:22:21.254 00:22:21.254 --- 10.0.0.2 ping statistics --- 00:22:21.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.254 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:21.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:22:21.254 00:22:21.254 --- 10.0.0.1 ping statistics --- 00:22:21.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.254 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3271230 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3271230 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3271230 ']' 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:21.254 05:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.255 [2024-07-14 05:36:28.266612] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:21.255 [2024-07-14 05:36:28.266714] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.255 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.255 [2024-07-14 05:36:28.338518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.512 [2024-07-14 05:36:28.428360] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.512 [2024-07-14 05:36:28.428423] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:21.512 [2024-07-14 05:36:28.428449] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.512 [2024-07-14 05:36:28.428463] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.512 [2024-07-14 05:36:28.428474] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.512 [2024-07-14 05:36:28.428503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.512 05:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:21.512 05:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:21.512 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:21.512 05:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:21.512 05:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.512 05:36:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.512 05:36:28 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:21.513 05:36:28 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:21.786 true 00:22:21.786 05:36:28 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:21.786 05:36:28 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:22.051 05:36:28 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:22.051 05:36:28 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:22.051 05:36:28 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:22.308 05:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:22.308 05:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:22.565 05:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:22.565 05:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:22.565 05:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:22.823 05:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:22.823 05:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:23.081 05:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:23.081 05:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:23.081 05:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:23.081 05:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:23.339 05:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:23.339 05:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:23.339 05:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:23.597 05:36:30 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:23.597 05:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:23.855 05:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:23.855 05:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:23.855 05:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:24.113 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:24.113 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.pHtpqqHnPU 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.KgWnOhT9U2 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.pHtpqqHnPU 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.KgWnOhT9U2 00:22:24.371 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:24.630 05:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:25.196 05:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.pHtpqqHnPU 00:22:25.196 05:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.pHtpqqHnPU 00:22:25.196 05:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:25.196 [2024-07-14 05:36:32.254680] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.196 05:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:25.455 05:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:25.714 [2024-07-14 05:36:32.748047] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:25.714 [2024-07-14 05:36:32.748322] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.714 05:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:25.973 malloc0 00:22:25.973 05:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:26.231 05:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pHtpqqHnPU 00:22:26.489 [2024-07-14 05:36:33.530607] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:26.489 05:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.pHtpqqHnPU 00:22:26.489 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.712 Initializing NVMe Controllers 00:22:38.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:38.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:38.712 Initialization complete. Launching workers. 
00:22:38.712 ======================================================== 00:22:38.712 Latency(us) 00:22:38.712 Device Information : IOPS MiB/s Average min max 00:22:38.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7760.84 30.32 8249.19 1228.41 11631.93 00:22:38.712 ======================================================== 00:22:38.712 Total : 7760.84 30.32 8249.19 1228.41 11631.93 00:22:38.712 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pHtpqqHnPU 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pHtpqqHnPU' 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3273003 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3273003 /var/tmp/bdevperf.sock 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3273003 ']' 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.712 [2024-07-14 05:36:43.690440] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:38.712 [2024-07-14 05:36:43.690510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3273003 ] 00:22:38.712 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.712 [2024-07-14 05:36:43.749627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.712 [2024-07-14 05:36:43.836179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:38.712 05:36:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pHtpqqHnPU 00:22:38.712 [2024-07-14 05:36:44.181411] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:38.712 [2024-07-14 05:36:44.181519] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:38.712 TLSTESTn1 00:22:38.712 05:36:44 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:38.712 Running I/O for 10 seconds... 00:22:48.677 00:22:48.677 Latency(us) 00:22:48.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.677 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:48.677 Verification LBA range: start 0x0 length 0x2000 00:22:48.677 TLSTESTn1 : 10.08 1643.59 6.42 0.00 0.00 77615.52 5995.33 112624.83 00:22:48.677 =================================================================================================================== 00:22:48.677 Total : 1643.59 6.42 0.00 0.00 77615.52 5995.33 112624.83 00:22:48.677 0 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3273003 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3273003 ']' 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3273003 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3273003 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3273003' 00:22:48.678 killing process with pid 3273003 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3273003 00:22:48.678 Received shutdown signal, test time was about 10.000000 seconds 00:22:48.678 00:22:48.678 Latency(us) 00:22:48.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:48.678 =================================================================================================================== 00:22:48.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:48.678 [2024-07-14 05:36:54.526532] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3273003 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KgWnOhT9U2 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KgWnOhT9U2 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KgWnOhT9U2 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.KgWnOhT9U2' 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3274317 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3274317 /var/tmp/bdevperf.sock 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3274317 ']' 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:48.678 05:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.678 [2024-07-14 05:36:54.783343] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:48.678 [2024-07-14 05:36:54.783422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3274317 ] 00:22:48.678 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.678 [2024-07-14 05:36:54.842690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.678 [2024-07-14 05:36:54.923926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KgWnOhT9U2 00:22:48.678 [2024-07-14 05:36:55.287453] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:48.678 [2024-07-14 05:36:55.287573] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:48.678 [2024-07-14 05:36:55.292954] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:48.678 [2024-07-14 05:36:55.293463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222aed0 (107): Transport endpoint is not connected 00:22:48.678 [2024-07-14 05:36:55.294451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222aed0 (9): Bad file descriptor 00:22:48.678 [2024-07-14 05:36:55.295449] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:48.678 [2024-07-14 05:36:55.295470] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:48.678 [2024-07-14 05:36:55.295487] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:48.678 request: 00:22:48.678 { 00:22:48.678 "name": "TLSTEST", 00:22:48.678 "trtype": "tcp", 00:22:48.678 "traddr": "10.0.0.2", 00:22:48.678 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.678 "adrfam": "ipv4", 00:22:48.678 "trsvcid": "4420", 00:22:48.678 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.678 "psk": "/tmp/tmp.KgWnOhT9U2", 00:22:48.678 "method": "bdev_nvme_attach_controller", 00:22:48.678 "req_id": 1 00:22:48.678 } 00:22:48.678 Got JSON-RPC error response 00:22:48.678 response: 00:22:48.678 { 00:22:48.678 "code": -5, 00:22:48.678 "message": "Input/output error" 00:22:48.678 } 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3274317 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3274317 ']' 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3274317 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3274317 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3274317' 00:22:48.678 killing process with pid 3274317 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3274317 00:22:48.678 Received shutdown signal, test time was about 10.000000 seconds 00:22:48.678 00:22:48.678 Latency(us) 00:22:48.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.678 =================================================================================================================== 00:22:48.678 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:48.678 [2024-07-14 05:36:55.347790] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3274317 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pHtpqqHnPU 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pHtpqqHnPU 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pHtpqqHnPU 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pHtpqqHnPU' 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3274447 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3274447 /var/tmp/bdevperf.sock 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3274447 ']' 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:48.678 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.678 [2024-07-14 05:36:55.611540] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:48.679 [2024-07-14 05:36:55.611626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3274447 ] 00:22:48.679 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.679 [2024-07-14 05:36:55.676434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.679 [2024-07-14 05:36:55.763264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.936 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:48.936 05:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:48.936 05:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.pHtpqqHnPU 00:22:49.195 [2024-07-14 05:36:56.089055] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.195 [2024-07-14 05:36:56.089195] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:49.195 [2024-07-14 05:36:56.094574] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:49.195 [2024-07-14 05:36:56.094610] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:49.195 [2024-07-14 05:36:56.094653] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:49.195 [2024-07-14 05:36:56.095116] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f78ed0 (107): Transport endpoint is not connected 00:22:49.195 [2024-07-14 05:36:56.096103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f78ed0 (9): Bad file descriptor 00:22:49.195 [2024-07-14 05:36:56.097101] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:49.195 [2024-07-14 05:36:56.097122] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:49.195 [2024-07-14 05:36:56.097140] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:49.195 request: 00:22:49.195 { 00:22:49.195 "name": "TLSTEST", 00:22:49.195 "trtype": "tcp", 00:22:49.195 "traddr": "10.0.0.2", 00:22:49.195 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:49.195 "adrfam": "ipv4", 00:22:49.195 "trsvcid": "4420", 00:22:49.195 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.195 "psk": "/tmp/tmp.pHtpqqHnPU", 00:22:49.195 "method": "bdev_nvme_attach_controller", 00:22:49.195 "req_id": 1 00:22:49.195 } 00:22:49.195 Got JSON-RPC error response 00:22:49.195 response: 00:22:49.195 { 00:22:49.195 "code": -5, 00:22:49.195 "message": "Input/output error" 00:22:49.195 } 00:22:49.195 05:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3274447 00:22:49.195 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3274447 ']' 00:22:49.195 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3274447 00:22:49.195 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:49.195 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:49.195 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3274447 00:22:49.195 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:49.195 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:49.195 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3274447' 00:22:49.195 killing process with pid 3274447 00:22:49.195 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3274447 00:22:49.195 Received shutdown signal, test time was about 10.000000 seconds 00:22:49.195 00:22:49.195 Latency(us) 00:22:49.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.195 =================================================================================================================== 00:22:49.195 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:49.195 [2024-07-14 05:36:56.150391] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:49.195 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3274447 00:22:49.453 05:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pHtpqqHnPU 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pHtpqqHnPU 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pHtpqqHnPU 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pHtpqqHnPU' 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3274537 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3274537 /var/tmp/bdevperf.sock 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3274537 ']' 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:49.454 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.454 [2024-07-14 05:36:56.411842] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:49.454 [2024-07-14 05:36:56.411967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3274537 ] 00:22:49.454 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.454 [2024-07-14 05:36:56.474553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.454 [2024-07-14 05:36:56.558370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.712 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:49.712 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:49.712 05:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pHtpqqHnPU 00:22:49.971 [2024-07-14 05:36:56.902988] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.971 [2024-07-14 05:36:56.903111] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:49.971 [2024-07-14 05:36:56.908481] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:49.971 [2024-07-14 05:36:56.908516] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:49.971 [2024-07-14 05:36:56.908557] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:49.971 [2024-07-14 05:36:56.909045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2032ed0 (107): Transport endpoint is not connected 00:22:49.971 [2024-07-14 05:36:56.910033] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2032ed0 (9): Bad file descriptor 00:22:49.971 [2024-07-14 05:36:56.911032] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:49.971 [2024-07-14 05:36:56.911053] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:49.971 [2024-07-14 05:36:56.911070] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:49.971 request: 00:22:49.971 { 00:22:49.971 "name": "TLSTEST", 00:22:49.971 "trtype": "tcp", 00:22:49.971 "traddr": "10.0.0.2", 00:22:49.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:49.971 "adrfam": "ipv4", 00:22:49.971 "trsvcid": "4420", 00:22:49.971 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:49.971 "psk": "/tmp/tmp.pHtpqqHnPU", 00:22:49.971 "method": "bdev_nvme_attach_controller", 00:22:49.971 "req_id": 1 00:22:49.971 } 00:22:49.971 Got JSON-RPC error response 00:22:49.971 response: 00:22:49.971 { 00:22:49.971 "code": -5, 00:22:49.971 "message": "Input/output error" 00:22:49.971 } 00:22:49.971 05:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3274537 00:22:49.971 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3274537 ']' 00:22:49.971 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3274537 00:22:49.971 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:49.971 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:49.971 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3274537 00:22:49.971 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:49.971 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:49.971 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3274537' 00:22:49.971 killing process with pid 3274537 00:22:49.971 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3274537 00:22:49.971 Received shutdown signal, test time was about 10.000000 seconds 00:22:49.971 00:22:49.971 Latency(us) 00:22:49.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.971 =================================================================================================================== 00:22:49.971 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:49.971 [2024-07-14 05:36:56.962973] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:49.971 05:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3274537 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3274610 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3274610 /var/tmp/bdevperf.sock 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3274610 ']' 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:50.230 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.230 [2024-07-14 05:36:57.228325] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:50.230 [2024-07-14 05:36:57.228418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3274610 ] 00:22:50.230 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.230 [2024-07-14 05:36:57.286550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.488 [2024-07-14 05:36:57.367942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.488 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:50.488 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:50.488 05:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:50.747 [2024-07-14 05:36:57.708204] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:50.747 [2024-07-14 05:36:57.709831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0a5c0 (9): Bad file descriptor 00:22:50.747 [2024-07-14 05:36:57.710825] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:50.747 [2024-07-14 05:36:57.710846] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:50.747 [2024-07-14 05:36:57.710863] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
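This block (target/tls.sh@155) repeats the attach with no --psk at all: the listener was created with -k, so it expects a TLS handshake, the plain TCP connection is dropped, spdk_sock_recv() returns errno 107, and the controller ends up in the same failed state reported by the JSON-RPC error that follows. Reduced to the command involved (socket path and NQNs verbatim from this log), the contrast with the later positive case is:

SOCK=/var/tmp/bdevperf.sock
# Fails against the TLS listener: no PSK supplied, so no handshake takes place.
scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# The same attach succeeds later in the run once --psk <keyfile> is added
# (see the TLSTESTn1 case further down).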
00:22:50.747 request: 00:22:50.747 { 00:22:50.747 "name": "TLSTEST", 00:22:50.747 "trtype": "tcp", 00:22:50.747 "traddr": "10.0.0.2", 00:22:50.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.747 "adrfam": "ipv4", 00:22:50.747 "trsvcid": "4420", 00:22:50.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.747 "method": "bdev_nvme_attach_controller", 00:22:50.747 "req_id": 1 00:22:50.747 } 00:22:50.747 Got JSON-RPC error response 00:22:50.747 response: 00:22:50.747 { 00:22:50.747 "code": -5, 00:22:50.747 "message": "Input/output error" 00:22:50.747 } 00:22:50.747 05:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3274610 00:22:50.747 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3274610 ']' 00:22:50.747 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3274610 00:22:50.747 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:50.747 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:50.747 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3274610 00:22:50.747 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:50.747 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:50.747 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3274610' 00:22:50.747 killing process with pid 3274610 00:22:50.747 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3274610 00:22:50.747 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.747 00:22:50.747 Latency(us) 00:22:50.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.747 =================================================================================================================== 00:22:50.747 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:50.747 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3274610 00:22:51.005 05:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:51.005 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:51.005 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:51.005 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:51.005 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:51.005 05:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3271230 00:22:51.005 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3271230 ']' 00:22:51.005 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3271230 00:22:51.005 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:51.005 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:51.005 05:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3271230 00:22:51.005 05:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:51.005 05:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:51.005 05:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3271230' 00:22:51.005 killing process with pid 3271230 00:22:51.005 05:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3271230 
00:22:51.005 [2024-07-14 05:36:58.008021] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:51.005 05:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3271230 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.GuMjktz3FM 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.GuMjktz3FM 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3274759 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3274759 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3274759 ']' 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:51.265 05:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.265 [2024-07-14 05:36:58.356192] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:51.265 [2024-07-14 05:36:58.356288] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.524 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.524 [2024-07-14 05:36:58.432080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.524 [2024-07-14 05:36:58.520223] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.524 [2024-07-14 05:36:58.520287] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.524 [2024-07-14 05:36:58.520313] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.524 [2024-07-14 05:36:58.520327] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.524 [2024-07-14 05:36:58.520340] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.524 [2024-07-14 05:36:58.520370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.782 05:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:51.782 05:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:51.782 05:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.782 05:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.782 05:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.782 05:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.782 05:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.GuMjktz3FM 00:22:51.782 05:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GuMjktz3FM 00:22:51.782 05:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:52.040 [2024-07-14 05:36:58.933809] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.040 05:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:52.297 05:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:52.555 [2024-07-14 05:36:59.431126] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:52.555 [2024-07-14 05:36:59.431349] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.555 05:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:52.813 malloc0 00:22:52.813 05:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:53.071 05:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GuMjktz3FM 
00:22:53.329 [2024-07-14 05:37:00.247925] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:53.329 05:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GuMjktz3FM 00:22:53.329 05:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:53.329 05:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:53.329 05:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:53.329 05:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GuMjktz3FM' 00:22:53.329 05:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.329 05:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3275044 00:22:53.329 05:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.329 05:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.329 05:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3275044 /var/tmp/bdevperf.sock 00:22:53.329 05:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3275044 ']' 00:22:53.329 05:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.329 05:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:53.329 05:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.329 05:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:53.329 05:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.329 [2024-07-14 05:37:00.313915] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:53.329 [2024-07-14 05:37:00.314004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3275044 ] 00:22:53.330 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.330 [2024-07-14 05:37:00.373320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.588 [2024-07-14 05:37:00.460375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.588 05:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:53.588 05:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:53.588 05:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GuMjktz3FM 00:22:53.846 [2024-07-14 05:37:00.843364] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.846 [2024-07-14 05:37:00.843475] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:53.846 TLSTESTn1 00:22:53.846 05:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:54.104 Running I/O for 10 seconds... 00:23:04.098 00:23:04.098 Latency(us) 00:23:04.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.098 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:04.098 Verification LBA range: start 0x0 length 0x2000 00:23:04.098 TLSTESTn1 : 10.07 1389.99 5.43 0.00 0.00 91803.08 8738.13 157674.76 00:23:04.098 =================================================================================================================== 00:23:04.098 Total : 1389.99 5.43 0.00 0.00 91803.08 8738.13 157674.76 00:23:04.098 0 00:23:04.098 05:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:04.098 05:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3275044 00:23:04.098 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3275044 ']' 00:23:04.098 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3275044 00:23:04.098 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:04.098 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:04.098 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3275044 00:23:04.098 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:04.098 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:04.098 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3275044' 00:23:04.098 killing process with pid 3275044 00:23:04.098 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3275044 00:23:04.098 Received shutdown signal, test time was about 10.000000 seconds 00:23:04.098 00:23:04.098 Latency(us) 00:23:04.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:04.098 =================================================================================================================== 00:23:04.098 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:04.098 [2024-07-14 05:37:11.193410] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:04.098 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3275044 00:23:04.355 05:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.GuMjktz3FM 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GuMjktz3FM 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GuMjktz3FM 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GuMjktz3FM 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GuMjktz3FM' 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3276356 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3276356 /var/tmp/bdevperf.sock 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3276356 ']' 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:04.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:04.356 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.614 [2024-07-14 05:37:11.471012] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
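target/tls.sh@170 deliberately loosens the key file to 0666 before this bdevperf instance starts, so the attach that follows is expected to be rejected: bdev_nvme_load_psk on the initiator (and tcp_load_psk on the target, further down) refuse a PSK file that is readable by anyone other than its owner, which is what produces the "Incorrect permissions for PSK file" / Operation not permitted errors below. Reproducing and undoing the condition by hand, with the key path from the log:

key=/tmp/tmp.GuMjktz3FM
chmod 0666 "$key"          # provokes "Incorrect permissions for PSK file" (EPERM) below
stat -c '%a %n' "$key"
chmod 0600 "$key"          # what target/tls.sh@181 restores before the next positive case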
00:23:04.614 [2024-07-14 05:37:11.471103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276356 ] 00:23:04.614 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.614 [2024-07-14 05:37:11.530392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.614 [2024-07-14 05:37:11.620307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.872 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:04.872 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:04.872 05:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GuMjktz3FM 00:23:04.872 [2024-07-14 05:37:11.960029] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:04.872 [2024-07-14 05:37:11.960123] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:04.872 [2024-07-14 05:37:11.960138] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.GuMjktz3FM 00:23:04.872 request: 00:23:04.872 { 00:23:04.872 "name": "TLSTEST", 00:23:04.872 "trtype": "tcp", 00:23:04.872 "traddr": "10.0.0.2", 00:23:04.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:04.872 "adrfam": "ipv4", 00:23:04.872 "trsvcid": "4420", 00:23:04.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.872 "psk": "/tmp/tmp.GuMjktz3FM", 00:23:04.872 "method": "bdev_nvme_attach_controller", 00:23:04.872 "req_id": 1 00:23:04.872 } 00:23:04.872 Got JSON-RPC error response 00:23:04.872 response: 00:23:04.872 { 00:23:04.872 "code": -1, 00:23:04.872 "message": "Operation not permitted" 00:23:04.872 } 00:23:05.130 05:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3276356 00:23:05.130 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3276356 ']' 00:23:05.130 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3276356 00:23:05.130 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:05.130 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:05.130 05:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3276356 00:23:05.130 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:05.130 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:05.130 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3276356' 00:23:05.130 killing process with pid 3276356 00:23:05.130 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3276356 00:23:05.130 Received shutdown signal, test time was about 10.000000 seconds 00:23:05.130 00:23:05.130 Latency(us) 00:23:05.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.130 =================================================================================================================== 00:23:05.130 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:05.130 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 3276356 00:23:05.130 05:37:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:05.130 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:05.130 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:05.130 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:05.130 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:05.130 05:37:12 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3274759 00:23:05.130 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3274759 ']' 00:23:05.130 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3274759 00:23:05.130 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:05.130 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:05.130 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3274759 00:23:05.390 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:05.390 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:05.390 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3274759' 00:23:05.390 killing process with pid 3274759 00:23:05.390 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3274759 00:23:05.390 [2024-07-14 05:37:12.259746] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:05.390 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3274759 00:23:05.648 05:37:12 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:05.648 05:37:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:05.648 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:05.648 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.648 05:37:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3276497 00:23:05.648 05:37:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:05.648 05:37:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3276497 00:23:05.648 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3276497 ']' 00:23:05.648 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.648 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:05.648 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.648 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:05.648 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.648 [2024-07-14 05:37:12.562782] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:05.648 [2024-07-14 05:37:12.562893] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.648 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.648 [2024-07-14 05:37:12.632925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.648 [2024-07-14 05:37:12.719387] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.648 [2024-07-14 05:37:12.719452] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.648 [2024-07-14 05:37:12.719479] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.648 [2024-07-14 05:37:12.719492] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.648 [2024-07-14 05:37:12.719504] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.648 [2024-07-14 05:37:12.719542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.907 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:05.907 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:05.907 05:37:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:05.907 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.907 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.907 05:37:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.907 05:37:12 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.GuMjktz3FM 00:23:05.907 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:05.907 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.GuMjktz3FM 00:23:05.907 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:05.907 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:05.907 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:05.907 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:05.907 05:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.GuMjktz3FM 00:23:05.907 05:37:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GuMjktz3FM 00:23:05.907 05:37:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:06.165 [2024-07-14 05:37:13.100472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.165 05:37:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:06.422 05:37:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:06.679 [2024-07-14 05:37:13.593823] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:23:06.679 [2024-07-14 05:37:13.594097] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.679 05:37:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:06.936 malloc0 00:23:06.936 05:37:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:07.194 05:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GuMjktz3FM 00:23:07.451 [2024-07-14 05:37:14.475846] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:07.451 [2024-07-14 05:37:14.475899] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:07.452 [2024-07-14 05:37:14.475939] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:07.452 request: 00:23:07.452 { 00:23:07.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.452 "host": "nqn.2016-06.io.spdk:host1", 00:23:07.452 "psk": "/tmp/tmp.GuMjktz3FM", 00:23:07.452 "method": "nvmf_subsystem_add_host", 00:23:07.452 "req_id": 1 00:23:07.452 } 00:23:07.452 Got JSON-RPC error response 00:23:07.452 response: 00:23:07.452 { 00:23:07.452 "code": -32603, 00:23:07.452 "message": "Internal error" 00:23:07.452 } 00:23:07.452 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:07.452 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:07.452 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:07.452 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:07.452 05:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3276497 00:23:07.452 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3276497 ']' 00:23:07.452 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3276497 00:23:07.452 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:07.452 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:07.452 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3276497 00:23:07.452 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:07.452 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:07.452 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3276497' 00:23:07.452 killing process with pid 3276497 00:23:07.452 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3276497 00:23:07.452 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3276497 00:23:07.710 05:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.GuMjktz3FM 00:23:07.710 05:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:07.710 05:37:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:07.710 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:07.710 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.710 05:37:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=3276789 00:23:07.710 05:37:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:07.710 05:37:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3276789 00:23:07.710 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3276789 ']' 00:23:07.710 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.710 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:07.710 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.710 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:07.710 05:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.969 [2024-07-14 05:37:14.825469] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:07.969 [2024-07-14 05:37:14.825570] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.969 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.969 [2024-07-14 05:37:14.895291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.969 [2024-07-14 05:37:14.984920] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.969 [2024-07-14 05:37:14.984995] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.969 [2024-07-14 05:37:14.985017] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.969 [2024-07-14 05:37:14.985030] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.969 [2024-07-14 05:37:14.985042] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:07.969 [2024-07-14 05:37:14.985073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.227 05:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:08.227 05:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:08.227 05:37:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:08.227 05:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:08.227 05:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.227 05:37:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.227 05:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.GuMjktz3FM 00:23:08.227 05:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GuMjktz3FM 00:23:08.227 05:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:08.485 [2024-07-14 05:37:15.340244] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.485 05:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:08.743 05:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:08.743 [2024-07-14 05:37:15.829614] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:08.743 [2024-07-14 05:37:15.829947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.743 05:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:09.001 malloc0 00:23:09.001 05:37:16 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:09.259 05:37:16 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GuMjktz3FM 00:23:09.516 [2024-07-14 05:37:16.543173] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:09.516 05:37:16 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3276957 00:23:09.516 05:37:16 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:09.516 05:37:16 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.516 05:37:16 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3276957 /var/tmp/bdevperf.sock 00:23:09.516 05:37:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3276957 ']' 00:23:09.516 05:37:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.516 05:37:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:09.516 05:37:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.516 05:37:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:09.516 05:37:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.516 [2024-07-14 05:37:16.605231] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:09.516 [2024-07-14 05:37:16.605323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276957 ] 00:23:09.775 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.775 [2024-07-14 05:37:16.665188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.775 [2024-07-14 05:37:16.749596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.775 05:37:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:09.775 05:37:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:09.775 05:37:16 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GuMjktz3FM 00:23:10.033 [2024-07-14 05:37:17.082412] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.033 [2024-07-14 05:37:17.082521] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:10.291 TLSTESTn1 00:23:10.291 05:37:17 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:10.549 05:37:17 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:10.549 "subsystems": [ 00:23:10.549 { 00:23:10.549 "subsystem": "keyring", 00:23:10.549 "config": [] 00:23:10.549 }, 00:23:10.549 { 00:23:10.549 "subsystem": "iobuf", 00:23:10.549 "config": [ 00:23:10.549 { 00:23:10.549 "method": "iobuf_set_options", 00:23:10.549 "params": { 00:23:10.549 "small_pool_count": 8192, 00:23:10.549 "large_pool_count": 1024, 00:23:10.549 "small_bufsize": 8192, 00:23:10.549 "large_bufsize": 135168 00:23:10.549 } 00:23:10.549 } 00:23:10.549 ] 00:23:10.549 }, 00:23:10.549 { 00:23:10.549 "subsystem": "sock", 00:23:10.549 "config": [ 00:23:10.549 { 00:23:10.549 "method": "sock_set_default_impl", 00:23:10.549 "params": { 00:23:10.549 "impl_name": "posix" 00:23:10.549 } 00:23:10.549 }, 00:23:10.549 { 00:23:10.549 "method": "sock_impl_set_options", 00:23:10.549 "params": { 00:23:10.549 "impl_name": "ssl", 00:23:10.549 "recv_buf_size": 4096, 00:23:10.549 "send_buf_size": 4096, 00:23:10.549 "enable_recv_pipe": true, 00:23:10.549 "enable_quickack": false, 00:23:10.549 "enable_placement_id": 0, 00:23:10.549 "enable_zerocopy_send_server": true, 00:23:10.549 "enable_zerocopy_send_client": false, 00:23:10.549 "zerocopy_threshold": 0, 00:23:10.549 "tls_version": 0, 00:23:10.549 "enable_ktls": false 00:23:10.549 } 00:23:10.549 }, 00:23:10.549 { 00:23:10.549 "method": "sock_impl_set_options", 00:23:10.549 "params": { 00:23:10.549 "impl_name": "posix", 00:23:10.549 "recv_buf_size": 2097152, 00:23:10.549 "send_buf_size": 
2097152, 00:23:10.549 "enable_recv_pipe": true, 00:23:10.549 "enable_quickack": false, 00:23:10.549 "enable_placement_id": 0, 00:23:10.549 "enable_zerocopy_send_server": true, 00:23:10.549 "enable_zerocopy_send_client": false, 00:23:10.549 "zerocopy_threshold": 0, 00:23:10.549 "tls_version": 0, 00:23:10.549 "enable_ktls": false 00:23:10.549 } 00:23:10.549 } 00:23:10.549 ] 00:23:10.549 }, 00:23:10.549 { 00:23:10.549 "subsystem": "vmd", 00:23:10.549 "config": [] 00:23:10.549 }, 00:23:10.549 { 00:23:10.549 "subsystem": "accel", 00:23:10.550 "config": [ 00:23:10.550 { 00:23:10.550 "method": "accel_set_options", 00:23:10.550 "params": { 00:23:10.550 "small_cache_size": 128, 00:23:10.550 "large_cache_size": 16, 00:23:10.550 "task_count": 2048, 00:23:10.550 "sequence_count": 2048, 00:23:10.550 "buf_count": 2048 00:23:10.550 } 00:23:10.550 } 00:23:10.550 ] 00:23:10.550 }, 00:23:10.550 { 00:23:10.550 "subsystem": "bdev", 00:23:10.550 "config": [ 00:23:10.550 { 00:23:10.550 "method": "bdev_set_options", 00:23:10.550 "params": { 00:23:10.550 "bdev_io_pool_size": 65535, 00:23:10.550 "bdev_io_cache_size": 256, 00:23:10.550 "bdev_auto_examine": true, 00:23:10.550 "iobuf_small_cache_size": 128, 00:23:10.550 "iobuf_large_cache_size": 16 00:23:10.550 } 00:23:10.550 }, 00:23:10.550 { 00:23:10.550 "method": "bdev_raid_set_options", 00:23:10.550 "params": { 00:23:10.550 "process_window_size_kb": 1024 00:23:10.550 } 00:23:10.550 }, 00:23:10.550 { 00:23:10.550 "method": "bdev_iscsi_set_options", 00:23:10.550 "params": { 00:23:10.550 "timeout_sec": 30 00:23:10.550 } 00:23:10.550 }, 00:23:10.550 { 00:23:10.550 "method": "bdev_nvme_set_options", 00:23:10.550 "params": { 00:23:10.550 "action_on_timeout": "none", 00:23:10.550 "timeout_us": 0, 00:23:10.550 "timeout_admin_us": 0, 00:23:10.550 "keep_alive_timeout_ms": 10000, 00:23:10.550 "arbitration_burst": 0, 00:23:10.550 "low_priority_weight": 0, 00:23:10.550 "medium_priority_weight": 0, 00:23:10.550 "high_priority_weight": 0, 00:23:10.550 "nvme_adminq_poll_period_us": 10000, 00:23:10.550 "nvme_ioq_poll_period_us": 0, 00:23:10.550 "io_queue_requests": 0, 00:23:10.550 "delay_cmd_submit": true, 00:23:10.550 "transport_retry_count": 4, 00:23:10.550 "bdev_retry_count": 3, 00:23:10.550 "transport_ack_timeout": 0, 00:23:10.550 "ctrlr_loss_timeout_sec": 0, 00:23:10.550 "reconnect_delay_sec": 0, 00:23:10.550 "fast_io_fail_timeout_sec": 0, 00:23:10.550 "disable_auto_failback": false, 00:23:10.550 "generate_uuids": false, 00:23:10.550 "transport_tos": 0, 00:23:10.550 "nvme_error_stat": false, 00:23:10.550 "rdma_srq_size": 0, 00:23:10.550 "io_path_stat": false, 00:23:10.550 "allow_accel_sequence": false, 00:23:10.550 "rdma_max_cq_size": 0, 00:23:10.550 "rdma_cm_event_timeout_ms": 0, 00:23:10.550 "dhchap_digests": [ 00:23:10.550 "sha256", 00:23:10.550 "sha384", 00:23:10.550 "sha512" 00:23:10.550 ], 00:23:10.550 "dhchap_dhgroups": [ 00:23:10.550 "null", 00:23:10.550 "ffdhe2048", 00:23:10.550 "ffdhe3072", 00:23:10.550 "ffdhe4096", 00:23:10.550 "ffdhe6144", 00:23:10.550 "ffdhe8192" 00:23:10.550 ] 00:23:10.550 } 00:23:10.550 }, 00:23:10.550 { 00:23:10.550 "method": "bdev_nvme_set_hotplug", 00:23:10.550 "params": { 00:23:10.550 "period_us": 100000, 00:23:10.550 "enable": false 00:23:10.550 } 00:23:10.550 }, 00:23:10.550 { 00:23:10.550 "method": "bdev_malloc_create", 00:23:10.550 "params": { 00:23:10.550 "name": "malloc0", 00:23:10.550 "num_blocks": 8192, 00:23:10.550 "block_size": 4096, 00:23:10.550 "physical_block_size": 4096, 00:23:10.550 "uuid": 
"64b4c308-b2df-4b76-ba58-b716e980f4ee", 00:23:10.550 "optimal_io_boundary": 0 00:23:10.550 } 00:23:10.550 }, 00:23:10.550 { 00:23:10.550 "method": "bdev_wait_for_examine" 00:23:10.550 } 00:23:10.550 ] 00:23:10.550 }, 00:23:10.550 { 00:23:10.550 "subsystem": "nbd", 00:23:10.550 "config": [] 00:23:10.550 }, 00:23:10.550 { 00:23:10.550 "subsystem": "scheduler", 00:23:10.550 "config": [ 00:23:10.550 { 00:23:10.550 "method": "framework_set_scheduler", 00:23:10.550 "params": { 00:23:10.550 "name": "static" 00:23:10.550 } 00:23:10.550 } 00:23:10.550 ] 00:23:10.550 }, 00:23:10.550 { 00:23:10.550 "subsystem": "nvmf", 00:23:10.550 "config": [ 00:23:10.550 { 00:23:10.550 "method": "nvmf_set_config", 00:23:10.550 "params": { 00:23:10.550 "discovery_filter": "match_any", 00:23:10.550 "admin_cmd_passthru": { 00:23:10.550 "identify_ctrlr": false 00:23:10.550 } 00:23:10.550 } 00:23:10.550 }, 00:23:10.550 { 00:23:10.550 "method": "nvmf_set_max_subsystems", 00:23:10.550 "params": { 00:23:10.550 "max_subsystems": 1024 00:23:10.550 } 00:23:10.550 }, 00:23:10.550 { 00:23:10.550 "method": "nvmf_set_crdt", 00:23:10.550 "params": { 00:23:10.550 "crdt1": 0, 00:23:10.550 "crdt2": 0, 00:23:10.550 "crdt3": 0 00:23:10.550 } 00:23:10.550 }, 00:23:10.550 { 00:23:10.550 "method": "nvmf_create_transport", 00:23:10.550 "params": { 00:23:10.550 "trtype": "TCP", 00:23:10.550 "max_queue_depth": 128, 00:23:10.550 "max_io_qpairs_per_ctrlr": 127, 00:23:10.550 "in_capsule_data_size": 4096, 00:23:10.550 "max_io_size": 131072, 00:23:10.550 "io_unit_size": 131072, 00:23:10.550 "max_aq_depth": 128, 00:23:10.550 "num_shared_buffers": 511, 00:23:10.550 "buf_cache_size": 4294967295, 00:23:10.550 "dif_insert_or_strip": false, 00:23:10.550 "zcopy": false, 00:23:10.550 "c2h_success": false, 00:23:10.550 "sock_priority": 0, 00:23:10.550 "abort_timeout_sec": 1, 00:23:10.550 "ack_timeout": 0, 00:23:10.550 "data_wr_pool_size": 0 00:23:10.550 } 00:23:10.550 }, 00:23:10.550 { 00:23:10.550 "method": "nvmf_create_subsystem", 00:23:10.550 "params": { 00:23:10.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.550 "allow_any_host": false, 00:23:10.550 "serial_number": "SPDK00000000000001", 00:23:10.550 "model_number": "SPDK bdev Controller", 00:23:10.550 "max_namespaces": 10, 00:23:10.550 "min_cntlid": 1, 00:23:10.550 "max_cntlid": 65519, 00:23:10.550 "ana_reporting": false 00:23:10.550 } 00:23:10.550 }, 00:23:10.550 { 00:23:10.550 "method": "nvmf_subsystem_add_host", 00:23:10.550 "params": { 00:23:10.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.550 "host": "nqn.2016-06.io.spdk:host1", 00:23:10.550 "psk": "/tmp/tmp.GuMjktz3FM" 00:23:10.550 } 00:23:10.550 }, 00:23:10.550 { 00:23:10.550 "method": "nvmf_subsystem_add_ns", 00:23:10.550 "params": { 00:23:10.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.550 "namespace": { 00:23:10.550 "nsid": 1, 00:23:10.550 "bdev_name": "malloc0", 00:23:10.550 "nguid": "64B4C308B2DF4B76BA58B716E980F4EE", 00:23:10.550 "uuid": "64b4c308-b2df-4b76-ba58-b716e980f4ee", 00:23:10.550 "no_auto_visible": false 00:23:10.550 } 00:23:10.550 } 00:23:10.550 }, 00:23:10.550 { 00:23:10.550 "method": "nvmf_subsystem_add_listener", 00:23:10.550 "params": { 00:23:10.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.550 "listen_address": { 00:23:10.550 "trtype": "TCP", 00:23:10.550 "adrfam": "IPv4", 00:23:10.550 "traddr": "10.0.0.2", 00:23:10.550 "trsvcid": "4420" 00:23:10.550 }, 00:23:10.550 "secure_channel": true 00:23:10.550 } 00:23:10.550 } 00:23:10.550 ] 00:23:10.550 } 00:23:10.550 ] 00:23:10.550 }' 00:23:10.550 05:37:17 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:10.809 05:37:17 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:10.809 "subsystems": [ 00:23:10.809 { 00:23:10.809 "subsystem": "keyring", 00:23:10.809 "config": [] 00:23:10.809 }, 00:23:10.809 { 00:23:10.809 "subsystem": "iobuf", 00:23:10.809 "config": [ 00:23:10.809 { 00:23:10.809 "method": "iobuf_set_options", 00:23:10.809 "params": { 00:23:10.809 "small_pool_count": 8192, 00:23:10.809 "large_pool_count": 1024, 00:23:10.809 "small_bufsize": 8192, 00:23:10.809 "large_bufsize": 135168 00:23:10.809 } 00:23:10.809 } 00:23:10.809 ] 00:23:10.809 }, 00:23:10.809 { 00:23:10.809 "subsystem": "sock", 00:23:10.809 "config": [ 00:23:10.809 { 00:23:10.809 "method": "sock_set_default_impl", 00:23:10.809 "params": { 00:23:10.809 "impl_name": "posix" 00:23:10.809 } 00:23:10.809 }, 00:23:10.809 { 00:23:10.809 "method": "sock_impl_set_options", 00:23:10.809 "params": { 00:23:10.809 "impl_name": "ssl", 00:23:10.809 "recv_buf_size": 4096, 00:23:10.809 "send_buf_size": 4096, 00:23:10.809 "enable_recv_pipe": true, 00:23:10.809 "enable_quickack": false, 00:23:10.809 "enable_placement_id": 0, 00:23:10.809 "enable_zerocopy_send_server": true, 00:23:10.809 "enable_zerocopy_send_client": false, 00:23:10.809 "zerocopy_threshold": 0, 00:23:10.809 "tls_version": 0, 00:23:10.809 "enable_ktls": false 00:23:10.809 } 00:23:10.809 }, 00:23:10.809 { 00:23:10.809 "method": "sock_impl_set_options", 00:23:10.809 "params": { 00:23:10.809 "impl_name": "posix", 00:23:10.809 "recv_buf_size": 2097152, 00:23:10.809 "send_buf_size": 2097152, 00:23:10.809 "enable_recv_pipe": true, 00:23:10.809 "enable_quickack": false, 00:23:10.809 "enable_placement_id": 0, 00:23:10.809 "enable_zerocopy_send_server": true, 00:23:10.809 "enable_zerocopy_send_client": false, 00:23:10.809 "zerocopy_threshold": 0, 00:23:10.809 "tls_version": 0, 00:23:10.809 "enable_ktls": false 00:23:10.809 } 00:23:10.809 } 00:23:10.809 ] 00:23:10.809 }, 00:23:10.809 { 00:23:10.809 "subsystem": "vmd", 00:23:10.809 "config": [] 00:23:10.809 }, 00:23:10.809 { 00:23:10.809 "subsystem": "accel", 00:23:10.809 "config": [ 00:23:10.809 { 00:23:10.809 "method": "accel_set_options", 00:23:10.809 "params": { 00:23:10.809 "small_cache_size": 128, 00:23:10.809 "large_cache_size": 16, 00:23:10.809 "task_count": 2048, 00:23:10.809 "sequence_count": 2048, 00:23:10.809 "buf_count": 2048 00:23:10.809 } 00:23:10.809 } 00:23:10.809 ] 00:23:10.809 }, 00:23:10.809 { 00:23:10.809 "subsystem": "bdev", 00:23:10.809 "config": [ 00:23:10.809 { 00:23:10.809 "method": "bdev_set_options", 00:23:10.809 "params": { 00:23:10.809 "bdev_io_pool_size": 65535, 00:23:10.809 "bdev_io_cache_size": 256, 00:23:10.809 "bdev_auto_examine": true, 00:23:10.809 "iobuf_small_cache_size": 128, 00:23:10.809 "iobuf_large_cache_size": 16 00:23:10.810 } 00:23:10.810 }, 00:23:10.810 { 00:23:10.810 "method": "bdev_raid_set_options", 00:23:10.810 "params": { 00:23:10.810 "process_window_size_kb": 1024 00:23:10.810 } 00:23:10.810 }, 00:23:10.810 { 00:23:10.810 "method": "bdev_iscsi_set_options", 00:23:10.810 "params": { 00:23:10.810 "timeout_sec": 30 00:23:10.810 } 00:23:10.810 }, 00:23:10.810 { 00:23:10.810 "method": "bdev_nvme_set_options", 00:23:10.810 "params": { 00:23:10.810 "action_on_timeout": "none", 00:23:10.810 "timeout_us": 0, 00:23:10.810 "timeout_admin_us": 0, 00:23:10.810 "keep_alive_timeout_ms": 10000, 00:23:10.810 "arbitration_burst": 0, 
00:23:10.810 "low_priority_weight": 0, 00:23:10.810 "medium_priority_weight": 0, 00:23:10.810 "high_priority_weight": 0, 00:23:10.810 "nvme_adminq_poll_period_us": 10000, 00:23:10.810 "nvme_ioq_poll_period_us": 0, 00:23:10.810 "io_queue_requests": 512, 00:23:10.810 "delay_cmd_submit": true, 00:23:10.810 "transport_retry_count": 4, 00:23:10.810 "bdev_retry_count": 3, 00:23:10.810 "transport_ack_timeout": 0, 00:23:10.810 "ctrlr_loss_timeout_sec": 0, 00:23:10.810 "reconnect_delay_sec": 0, 00:23:10.810 "fast_io_fail_timeout_sec": 0, 00:23:10.810 "disable_auto_failback": false, 00:23:10.810 "generate_uuids": false, 00:23:10.810 "transport_tos": 0, 00:23:10.810 "nvme_error_stat": false, 00:23:10.810 "rdma_srq_size": 0, 00:23:10.810 "io_path_stat": false, 00:23:10.810 "allow_accel_sequence": false, 00:23:10.810 "rdma_max_cq_size": 0, 00:23:10.810 "rdma_cm_event_timeout_ms": 0, 00:23:10.810 "dhchap_digests": [ 00:23:10.810 "sha256", 00:23:10.810 "sha384", 00:23:10.810 "sha512" 00:23:10.810 ], 00:23:10.810 "dhchap_dhgroups": [ 00:23:10.810 "null", 00:23:10.810 "ffdhe2048", 00:23:10.810 "ffdhe3072", 00:23:10.810 "ffdhe4096", 00:23:10.810 "ffdhe6144", 00:23:10.810 "ffdhe8192" 00:23:10.810 ] 00:23:10.810 } 00:23:10.810 }, 00:23:10.810 { 00:23:10.810 "method": "bdev_nvme_attach_controller", 00:23:10.810 "params": { 00:23:10.810 "name": "TLSTEST", 00:23:10.810 "trtype": "TCP", 00:23:10.810 "adrfam": "IPv4", 00:23:10.810 "traddr": "10.0.0.2", 00:23:10.810 "trsvcid": "4420", 00:23:10.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.810 "prchk_reftag": false, 00:23:10.810 "prchk_guard": false, 00:23:10.810 "ctrlr_loss_timeout_sec": 0, 00:23:10.810 "reconnect_delay_sec": 0, 00:23:10.810 "fast_io_fail_timeout_sec": 0, 00:23:10.810 "psk": "/tmp/tmp.GuMjktz3FM", 00:23:10.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.810 "hdgst": false, 00:23:10.810 "ddgst": false 00:23:10.810 } 00:23:10.810 }, 00:23:10.810 { 00:23:10.810 "method": "bdev_nvme_set_hotplug", 00:23:10.810 "params": { 00:23:10.810 "period_us": 100000, 00:23:10.810 "enable": false 00:23:10.810 } 00:23:10.810 }, 00:23:10.810 { 00:23:10.810 "method": "bdev_wait_for_examine" 00:23:10.810 } 00:23:10.810 ] 00:23:10.810 }, 00:23:10.810 { 00:23:10.810 "subsystem": "nbd", 00:23:10.810 "config": [] 00:23:10.810 } 00:23:10.810 ] 00:23:10.810 }' 00:23:10.810 05:37:17 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3276957 00:23:10.810 05:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3276957 ']' 00:23:10.810 05:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3276957 00:23:10.810 05:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:10.810 05:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:10.810 05:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3276957 00:23:10.810 05:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:10.810 05:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:10.810 05:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3276957' 00:23:10.810 killing process with pid 3276957 00:23:10.810 05:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3276957 00:23:10.810 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.810 00:23:10.810 Latency(us) 00:23:10.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:10.810 =================================================================================================================== 00:23:10.810 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:10.810 [2024-07-14 05:37:17.828329] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:10.810 05:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3276957 00:23:11.068 05:37:18 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3276789 00:23:11.068 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3276789 ']' 00:23:11.068 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3276789 00:23:11.068 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:11.068 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:11.068 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3276789 00:23:11.068 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:11.068 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:11.068 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3276789' 00:23:11.068 killing process with pid 3276789 00:23:11.068 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3276789 00:23:11.068 [2024-07-14 05:37:18.069834] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:11.068 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3276789 00:23:11.327 05:37:18 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:11.327 05:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.327 05:37:18 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:11.327 "subsystems": [ 00:23:11.327 { 00:23:11.327 "subsystem": "keyring", 00:23:11.327 "config": [] 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "subsystem": "iobuf", 00:23:11.327 "config": [ 00:23:11.327 { 00:23:11.327 "method": "iobuf_set_options", 00:23:11.327 "params": { 00:23:11.327 "small_pool_count": 8192, 00:23:11.327 "large_pool_count": 1024, 00:23:11.327 "small_bufsize": 8192, 00:23:11.327 "large_bufsize": 135168 00:23:11.327 } 00:23:11.327 } 00:23:11.327 ] 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "subsystem": "sock", 00:23:11.327 "config": [ 00:23:11.327 { 00:23:11.327 "method": "sock_set_default_impl", 00:23:11.327 "params": { 00:23:11.327 "impl_name": "posix" 00:23:11.327 } 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "method": "sock_impl_set_options", 00:23:11.327 "params": { 00:23:11.327 "impl_name": "ssl", 00:23:11.327 "recv_buf_size": 4096, 00:23:11.327 "send_buf_size": 4096, 00:23:11.327 "enable_recv_pipe": true, 00:23:11.327 "enable_quickack": false, 00:23:11.327 "enable_placement_id": 0, 00:23:11.327 "enable_zerocopy_send_server": true, 00:23:11.327 "enable_zerocopy_send_client": false, 00:23:11.327 "zerocopy_threshold": 0, 00:23:11.327 "tls_version": 0, 00:23:11.327 "enable_ktls": false 00:23:11.327 } 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "method": "sock_impl_set_options", 00:23:11.327 "params": { 00:23:11.327 "impl_name": "posix", 00:23:11.327 "recv_buf_size": 2097152, 00:23:11.327 "send_buf_size": 2097152, 00:23:11.327 "enable_recv_pipe": true, 
00:23:11.327 "enable_quickack": false, 00:23:11.327 "enable_placement_id": 0, 00:23:11.327 "enable_zerocopy_send_server": true, 00:23:11.327 "enable_zerocopy_send_client": false, 00:23:11.327 "zerocopy_threshold": 0, 00:23:11.327 "tls_version": 0, 00:23:11.327 "enable_ktls": false 00:23:11.327 } 00:23:11.327 } 00:23:11.327 ] 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "subsystem": "vmd", 00:23:11.327 "config": [] 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "subsystem": "accel", 00:23:11.327 "config": [ 00:23:11.327 { 00:23:11.327 "method": "accel_set_options", 00:23:11.327 "params": { 00:23:11.327 "small_cache_size": 128, 00:23:11.327 "large_cache_size": 16, 00:23:11.327 "task_count": 2048, 00:23:11.327 "sequence_count": 2048, 00:23:11.327 "buf_count": 2048 00:23:11.327 } 00:23:11.327 } 00:23:11.327 ] 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "subsystem": "bdev", 00:23:11.327 "config": [ 00:23:11.327 { 00:23:11.327 "method": "bdev_set_options", 00:23:11.327 "params": { 00:23:11.327 "bdev_io_pool_size": 65535, 00:23:11.327 "bdev_io_cache_size": 256, 00:23:11.327 "bdev_auto_examine": true, 00:23:11.327 "iobuf_small_cache_size": 128, 00:23:11.327 "iobuf_large_cache_size": 16 00:23:11.327 } 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "method": "bdev_raid_set_options", 00:23:11.327 "params": { 00:23:11.327 "process_window_size_kb": 1024 00:23:11.327 } 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "method": "bdev_iscsi_set_options", 00:23:11.327 "params": { 00:23:11.327 "timeout_sec": 30 00:23:11.327 } 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "method": "bdev_nvme_set_options", 00:23:11.327 "params": { 00:23:11.327 "action_on_timeout": "none", 00:23:11.327 "timeout_us": 0, 00:23:11.327 "timeout_admin_us": 0, 00:23:11.327 "keep_alive_timeout_ms": 10000, 00:23:11.327 "arbitration_burst": 0, 00:23:11.327 "low_priority_weight": 0, 00:23:11.327 "medium_priority_weight": 0, 00:23:11.327 "high_priority_weight": 0, 00:23:11.327 "nvme_adminq_poll_period_us": 10000, 00:23:11.327 "nvme_ioq_poll_period_us": 0, 00:23:11.327 "io_queue_requests": 0, 00:23:11.327 "delay_cmd_submit": true, 00:23:11.327 "transport_retry_count": 4, 00:23:11.327 "bdev_retry_count": 3, 00:23:11.327 "transport_ack_timeout": 0, 00:23:11.327 "ctrlr_loss_timeout_sec": 0, 00:23:11.327 "reconnect_delay_sec": 0, 00:23:11.327 "fast_io_fail_timeout_sec": 0, 00:23:11.327 "disable_auto_failback": false, 00:23:11.327 "generate_uuids": false, 00:23:11.327 "transport_tos": 0, 00:23:11.327 "nvme_error_stat": false, 00:23:11.327 "rdma_srq_size": 0, 00:23:11.327 "io_path_stat": false, 00:23:11.327 "allow_accel_sequence": false, 00:23:11.327 "rdma_max_cq_size": 0, 00:23:11.327 "rdma_cm_event_timeout_ms": 0, 00:23:11.327 "dhchap_digests": [ 00:23:11.327 "sha256", 00:23:11.327 "sha384", 00:23:11.327 "sha512" 00:23:11.327 ], 00:23:11.327 "dhchap_dhgroups": [ 00:23:11.327 "null", 00:23:11.327 "ffdhe2048", 00:23:11.327 "ffdhe3072", 00:23:11.327 "ffdhe4096", 00:23:11.327 "ffdhe 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:11.327 6144", 00:23:11.327 "ffdhe8192" 00:23:11.327 ] 00:23:11.327 } 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "method": "bdev_nvme_set_hotplug", 00:23:11.327 "params": { 00:23:11.327 "period_us": 100000, 00:23:11.327 "enable": false 00:23:11.327 } 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "method": "bdev_malloc_create", 00:23:11.327 "params": { 00:23:11.327 "name": "malloc0", 00:23:11.327 "num_blocks": 8192, 00:23:11.327 "block_size": 4096, 00:23:11.327 "physical_block_size": 4096, 
00:23:11.327 "uuid": "64b4c308-b2df-4b76-ba58-b716e980f4ee", 00:23:11.327 "optimal_io_boundary": 0 00:23:11.327 } 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "method": "bdev_wait_for_examine" 00:23:11.327 } 00:23:11.327 ] 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "subsystem": "nbd", 00:23:11.327 "config": [] 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "subsystem": "scheduler", 00:23:11.327 "config": [ 00:23:11.327 { 00:23:11.327 "method": "framework_set_scheduler", 00:23:11.327 "params": { 00:23:11.327 "name": "static" 00:23:11.327 } 00:23:11.327 } 00:23:11.327 ] 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "subsystem": "nvmf", 00:23:11.327 "config": [ 00:23:11.327 { 00:23:11.327 "method": "nvmf_set_config", 00:23:11.327 "params": { 00:23:11.327 "discovery_filter": "match_any", 00:23:11.327 "admin_cmd_passthru": { 00:23:11.327 "identify_ctrlr": false 00:23:11.327 } 00:23:11.327 } 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "method": "nvmf_set_max_subsystems", 00:23:11.327 "params": { 00:23:11.327 "max_subsystems": 1024 00:23:11.327 } 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "method": "nvmf_set_crdt", 00:23:11.327 "params": { 00:23:11.327 "crdt1": 0, 00:23:11.327 "crdt2": 0, 00:23:11.327 "crdt3": 0 00:23:11.327 } 00:23:11.327 }, 00:23:11.327 { 00:23:11.327 "method": "nvmf_create_transport", 00:23:11.327 "params": { 00:23:11.327 "trtype": "TCP", 00:23:11.327 "max_queue_depth": 128, 00:23:11.327 "max_io_qpairs_per_ctrlr": 127, 00:23:11.327 "in_capsule_data_size": 4096, 00:23:11.327 "max_io_size": 131072, 00:23:11.327 "io_unit_size": 131072, 00:23:11.327 "max_aq_depth": 128, 00:23:11.327 "num_shared_buffers": 511, 00:23:11.327 "buf_cache_size": 4294967295, 00:23:11.327 "dif_insert_or_strip": false, 00:23:11.327 "zcopy": false, 00:23:11.327 "c2h_success": false, 00:23:11.327 "sock_priority": 0, 00:23:11.328 "abort_timeout_sec": 1, 00:23:11.328 "ack_timeout": 0, 00:23:11.328 "data_wr_pool_size": 0 00:23:11.328 } 00:23:11.328 }, 00:23:11.328 { 00:23:11.328 "method": "nvmf_create_subsystem", 00:23:11.328 "params": { 00:23:11.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.328 "allow_any_host": false, 00:23:11.328 "serial_number": "SPDK00000000000001", 00:23:11.328 "model_number": "SPDK bdev Controller", 00:23:11.328 "max_namespaces": 10, 00:23:11.328 "min_cntlid": 1, 00:23:11.328 "max_cntlid": 65519, 00:23:11.328 "ana_reporting": false 00:23:11.328 } 00:23:11.328 }, 00:23:11.328 { 00:23:11.328 "method": "nvmf_subsystem_add_host", 00:23:11.328 "params": { 00:23:11.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.328 "host": "nqn.2016-06.io.spdk:host1", 00:23:11.328 "psk": "/tmp/tmp.GuMjktz3FM" 00:23:11.328 } 00:23:11.328 }, 00:23:11.328 { 00:23:11.328 "method": "nvmf_subsystem_add_ns", 00:23:11.328 "params": { 00:23:11.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.328 "namespace": { 00:23:11.328 "nsid": 1, 00:23:11.328 "bdev_name": "malloc0", 00:23:11.328 "nguid": "64B4C308B2DF4B76BA58B716E980F4EE", 00:23:11.328 "uuid": "64b4c308-b2df-4b76-ba58-b716e980f4ee", 00:23:11.328 "no_auto_visible": false 00:23:11.328 } 00:23:11.328 } 00:23:11.328 }, 00:23:11.328 { 00:23:11.328 "method": "nvmf_subsystem_add_listener", 00:23:11.328 "params": { 00:23:11.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.328 "listen_address": { 00:23:11.328 "trtype": "TCP", 00:23:11.328 "adrfam": "IPv4", 00:23:11.328 "traddr": "10.0.0.2", 00:23:11.328 "trsvcid": "4420" 00:23:11.328 }, 00:23:11.328 "secure_channel": true 00:23:11.328 } 00:23:11.328 } 00:23:11.328 ] 00:23:11.328 } 00:23:11.328 ] 00:23:11.328 }' 
00:23:11.328 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.328 05:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3277230 00:23:11.328 05:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:11.328 05:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3277230 00:23:11.328 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3277230 ']' 00:23:11.328 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.328 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:11.328 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.328 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:11.328 05:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.328 [2024-07-14 05:37:18.373598] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:11.328 [2024-07-14 05:37:18.373701] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.328 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.586 [2024-07-14 05:37:18.445077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.586 [2024-07-14 05:37:18.534135] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.586 [2024-07-14 05:37:18.534201] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.586 [2024-07-14 05:37:18.534228] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.586 [2024-07-14 05:37:18.534242] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.586 [2024-07-14 05:37:18.534254] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:11.586 [2024-07-14 05:37:18.534348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.844 [2024-07-14 05:37:18.761894] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.844 [2024-07-14 05:37:18.777827] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:11.844 [2024-07-14 05:37:18.793896] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.844 [2024-07-14 05:37:18.813057] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.411 05:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:12.411 05:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:12.411 05:37:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.411 05:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.411 05:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.411 05:37:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.411 05:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3277384 00:23:12.411 05:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3277384 /var/tmp/bdevperf.sock 00:23:12.411 05:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3277384 ']' 00:23:12.411 05:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.411 05:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:12.411 05:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:12.411 05:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:12.411 "subsystems": [ 00:23:12.411 { 00:23:12.411 "subsystem": "keyring", 00:23:12.411 "config": [] 00:23:12.411 }, 00:23:12.411 { 00:23:12.411 "subsystem": "iobuf", 00:23:12.411 "config": [ 00:23:12.411 { 00:23:12.411 "method": "iobuf_set_options", 00:23:12.411 "params": { 00:23:12.411 "small_pool_count": 8192, 00:23:12.411 "large_pool_count": 1024, 00:23:12.411 "small_bufsize": 8192, 00:23:12.411 "large_bufsize": 135168 00:23:12.411 } 00:23:12.411 } 00:23:12.411 ] 00:23:12.411 }, 00:23:12.411 { 00:23:12.411 "subsystem": "sock", 00:23:12.411 "config": [ 00:23:12.411 { 00:23:12.411 "method": "sock_set_default_impl", 00:23:12.411 "params": { 00:23:12.411 "impl_name": "posix" 00:23:12.411 } 00:23:12.411 }, 00:23:12.411 { 00:23:12.411 "method": "sock_impl_set_options", 00:23:12.411 "params": { 00:23:12.411 "impl_name": "ssl", 00:23:12.411 "recv_buf_size": 4096, 00:23:12.411 "send_buf_size": 4096, 00:23:12.411 "enable_recv_pipe": true, 00:23:12.411 "enable_quickack": false, 00:23:12.411 "enable_placement_id": 0, 00:23:12.411 "enable_zerocopy_send_server": true, 00:23:12.411 "enable_zerocopy_send_client": false, 00:23:12.411 "zerocopy_threshold": 0, 00:23:12.411 "tls_version": 0, 00:23:12.411 "enable_ktls": false 00:23:12.411 } 00:23:12.411 }, 00:23:12.411 { 00:23:12.411 "method": "sock_impl_set_options", 00:23:12.411 "params": { 00:23:12.411 "impl_name": "posix", 00:23:12.411 "recv_buf_size": 2097152, 00:23:12.411 "send_buf_size": 2097152, 00:23:12.411 "enable_recv_pipe": true, 00:23:12.411 
"enable_quickack": false, 00:23:12.411 "enable_placement_id": 0, 00:23:12.411 "enable_zerocopy_send_server": true, 00:23:12.411 "enable_zerocopy_send_client": false, 00:23:12.411 "zerocopy_threshold": 0, 00:23:12.411 "tls_version": 0, 00:23:12.411 "enable_ktls": false 00:23:12.411 } 00:23:12.411 } 00:23:12.411 ] 00:23:12.411 }, 00:23:12.411 { 00:23:12.411 "subsystem": "vmd", 00:23:12.411 "config": [] 00:23:12.411 }, 00:23:12.411 { 00:23:12.411 "subsystem": "accel", 00:23:12.411 "config": [ 00:23:12.411 { 00:23:12.411 "method": "accel_set_options", 00:23:12.411 "params": { 00:23:12.411 "small_cache_size": 128, 00:23:12.411 "large_cache_size": 16, 00:23:12.411 "task_count": 2048, 00:23:12.411 "sequence_count": 2048, 00:23:12.411 "buf_count": 2048 00:23:12.411 } 00:23:12.411 } 00:23:12.411 ] 00:23:12.411 }, 00:23:12.411 { 00:23:12.411 "subsystem": "bdev", 00:23:12.411 "config": [ 00:23:12.411 { 00:23:12.411 "method": "bdev_set_options", 00:23:12.411 "params": { 00:23:12.411 "bdev_io_pool_size": 65535, 00:23:12.411 "bdev_io_cache_size": 256, 00:23:12.411 "bdev_auto_examine": true, 00:23:12.411 "iobuf_small_cache_size": 128, 00:23:12.411 "iobuf_large_cache_size": 16 00:23:12.411 } 00:23:12.412 }, 00:23:12.412 { 00:23:12.412 "method": "bdev_raid_set_options", 00:23:12.412 "params": { 00:23:12.412 "process_window_size_kb": 1024 00:23:12.412 } 00:23:12.412 }, 00:23:12.412 { 00:23:12.412 "method": "bdev_iscsi_set_options", 00:23:12.412 "params": { 00:23:12.412 "timeout_sec": 30 00:23:12.412 } 00:23:12.412 }, 00:23:12.412 { 00:23:12.412 "method": "bdev_nvme_set_options", 00:23:12.412 "params": { 00:23:12.412 "action_on_timeout": "none", 00:23:12.412 "timeout_us": 0, 00:23:12.412 "timeout_admin_us": 0, 00:23:12.412 "keep_alive_timeout_ms": 10000, 00:23:12.412 "arbitration_burst": 0, 00:23:12.412 "low_priority_weight": 0, 00:23:12.412 "medium_priority_weight": 0, 00:23:12.412 "high_priority_weight": 0, 00:23:12.412 "nvme_adminq_poll_period_us": 10000, 00:23:12.412 "nvme_ioq_poll_period_us": 0, 00:23:12.412 "io_queue_requests": 512, 00:23:12.412 "delay_cmd_submit": true, 00:23:12.412 "transport_retry_count": 4, 00:23:12.412 "bdev_retry_count": 3, 00:23:12.412 "transport_ack_timeout": 0, 00:23:12.412 "ctrlr_loss_timeout_sec": 0, 00:23:12.412 "reconnect_delay_sec": 0, 00:23:12.412 "fast_io_fail_timeout_sec": 0, 00:23:12.412 "disable_auto_failback": false, 00:23:12.412 "generate_uuids": false, 00:23:12.412 "transport_tos": 0, 00:23:12.412 "nvme_error_stat": false, 00:23:12.412 "rdma_srq_size": 0, 00:23:12.412 "io_path_stat": false, 00:23:12.412 "allow_accel_sequence": false, 00:23:12.412 "rdma_max_cq_size": 0, 00:23:12.412 "rdma_cm_event_timeout_ms": 0, 00:23:12.412 "dhchap_digests": [ 00:23:12.412 "sha256", 00:23:12.412 "sha384", 00:23:12.412 "sha512" 00:23:12.412 ], 00:23:12.412 "dhchap_dhgroups": [ 00:23:12.412 "null", 00:23:12.412 "ffdhe2048", 00:23:12.412 "ffdhe3072", 00:23:12.412 "ffdhe4096", 00:23:12.412 "ffdhe6144", 00:23:12.412 "ffdhe8192" 00:23:12.412 ] 00:23:12.412 } 00:23:12.412 }, 00:23:12.412 { 00:23:12.412 "method": "bdev_nvme_attach_controller", 00:23:12.412 "params": { 00:23:12.412 "name": "TLSTEST", 00:23:12.412 "trtype": "TCP", 00:23:12.412 "adrfam": "IPv4", 00:23:12.412 "traddr": "10.0.0.2", 00:23:12.412 "trsvcid": "4420", 00:23:12.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.412 "prchk_reftag": false, 00:23:12.412 "prchk_guard": false, 00:23:12.412 "ctrlr_loss_timeout_sec": 0, 00:23:12.412 "reconnect_delay_sec": 0, 00:23:12.412 "fast_io_fail_timeout_sec": 0, 00:23:12.412 
"psk": "/tmp/tmp.GuMjktz3FM", 00:23:12.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.412 "hdgst": false, 00:23:12.412 "ddgst": false 00:23:12.412 } 00:23:12.412 }, 00:23:12.412 { 00:23:12.412 "method": "bdev_nvme_set_hotplug", 00:23:12.412 "params": { 00:23:12.412 "period_us": 100000, 00:23:12.412 "enable": false 00:23:12.412 } 00:23:12.412 }, 00:23:12.412 { 00:23:12.412 "method": "bdev_wait_for_examine" 00:23:12.412 } 00:23:12.412 ] 00:23:12.412 }, 00:23:12.412 { 00:23:12.412 "subsystem": "nbd", 00:23:12.412 "config": [] 00:23:12.412 } 00:23:12.412 ] 00:23:12.412 }' 00:23:12.412 05:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:12.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.412 05:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:12.412 05:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.412 [2024-07-14 05:37:19.420446] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:12.412 [2024-07-14 05:37:19.420537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277384 ] 00:23:12.412 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.412 [2024-07-14 05:37:19.478720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.670 [2024-07-14 05:37:19.563349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.670 [2024-07-14 05:37:19.731626] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.670 [2024-07-14 05:37:19.731748] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:13.604 05:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:13.604 05:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:13.604 05:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:13.604 Running I/O for 10 seconds... 
00:23:23.603 00:23:23.603 Latency(us) 00:23:23.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.603 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:23.603 Verification LBA range: start 0x0 length 0x2000 00:23:23.603 TLSTESTn1 : 10.07 1674.54 6.54 0.00 0.00 76197.05 6359.42 109517.94 00:23:23.603 =================================================================================================================== 00:23:23.603 Total : 1674.54 6.54 0.00 0.00 76197.05 6359.42 109517.94 00:23:23.603 0 00:23:23.603 05:37:30 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:23.603 05:37:30 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3277384 00:23:23.603 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3277384 ']' 00:23:23.603 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3277384 00:23:23.603 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:23.603 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:23.603 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3277384 00:23:23.603 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:23.603 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:23.603 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3277384' 00:23:23.603 killing process with pid 3277384 00:23:23.603 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3277384 00:23:23.603 Received shutdown signal, test time was about 10.000000 seconds 00:23:23.603 00:23:23.603 Latency(us) 00:23:23.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.603 =================================================================================================================== 00:23:23.603 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:23.603 [2024-07-14 05:37:30.608344] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:23.603 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3277384 00:23:23.860 05:37:30 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3277230 00:23:23.860 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3277230 ']' 00:23:23.860 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3277230 00:23:23.860 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:23.860 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:23.860 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3277230 00:23:23.860 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:23.860 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:23.860 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3277230' 00:23:23.860 killing process with pid 3277230 00:23:23.860 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3277230 00:23:23.860 [2024-07-14 05:37:30.863525] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal 
in v24.09 hit 1 times 00:23:23.860 05:37:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3277230 00:23:24.117 05:37:31 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:24.117 05:37:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:24.117 05:37:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:24.117 05:37:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.117 05:37:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3278708 00:23:24.117 05:37:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:24.117 05:37:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3278708 00:23:24.117 05:37:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3278708 ']' 00:23:24.117 05:37:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.117 05:37:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:24.117 05:37:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.117 05:37:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:24.117 05:37:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.117 [2024-07-14 05:37:31.174077] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:24.117 [2024-07-14 05:37:31.174182] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.117 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.374 [2024-07-14 05:37:31.245117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.374 [2024-07-14 05:37:31.335415] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.374 [2024-07-14 05:37:31.335479] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.374 [2024-07-14 05:37:31.335508] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.374 [2024-07-14 05:37:31.335522] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.374 [2024-07-14 05:37:31.335533] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
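The setup_nvmf_tgt helper (tls.sh@185 earlier, tls.sh@219 for the target just started, as repeated in the trace that follows) boils down to this rpc.py sequence; subsystem NQN, serial number, listen address and PSK path are verbatim from the log.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10
    # -k enables the (experimental) TLS listener
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k
    "$SPDK/scripts/rpc.py" bdev_malloc_create 32 4096 -b malloc0
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # PSK-path form; this is what the "PSK path ... deprecated" warnings refer to
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GuMjktz3FM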
00:23:24.374 [2024-07-14 05:37:31.335569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.374 05:37:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:24.374 05:37:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:24.374 05:37:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.374 05:37:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.374 05:37:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.374 05:37:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.374 05:37:31 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.GuMjktz3FM 00:23:24.374 05:37:31 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GuMjktz3FM 00:23:24.374 05:37:31 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:24.630 [2024-07-14 05:37:31.687122] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.630 05:37:31 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:24.887 05:37:31 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:25.143 [2024-07-14 05:37:32.164393] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:25.143 [2024-07-14 05:37:32.164643] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.143 05:37:32 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:25.400 malloc0 00:23:25.400 05:37:32 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:25.656 05:37:32 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GuMjktz3FM 00:23:25.913 [2024-07-14 05:37:32.926975] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:25.913 05:37:32 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3278992 00:23:25.913 05:37:32 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:25.913 05:37:32 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:25.914 05:37:32 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3278992 /var/tmp/bdevperf.sock 00:23:25.914 05:37:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3278992 ']' 00:23:25.914 05:37:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.914 05:37:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:25.914 05:37:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:25.914 05:37:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:25.914 05:37:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.914 [2024-07-14 05:37:32.992935] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:25.914 [2024-07-14 05:37:32.993029] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278992 ] 00:23:26.171 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.171 [2024-07-14 05:37:33.053043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.171 [2024-07-14 05:37:33.142466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.171 05:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:26.171 05:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:26.171 05:37:33 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GuMjktz3FM 00:23:26.428 05:37:33 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:26.685 [2024-07-14 05:37:33.722435] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:26.943 nvme0n1 00:23:26.943 05:37:33 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:26.943 Running I/O for 1 seconds... 
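Note the difference from the earlier bdevperf passes: at tls.sh@227 and @228 (traced above) the PSK file is first registered as a named key over the bdevperf RPC socket and then referenced as key0, instead of passing the file path straight to --psk, which is the deprecated form the earlier nvme_ctrlr_psk warnings point at. Condensed from the trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GuMjktz3FM
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1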
00:23:28.315 00:23:28.315 Latency(us) 00:23:28.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.315 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:28.315 Verification LBA range: start 0x0 length 0x2000 00:23:28.315 nvme0n1 : 1.06 1632.41 6.38 0.00 0.00 76424.14 6262.33 118838.61 00:23:28.315 =================================================================================================================== 00:23:28.315 Total : 1632.41 6.38 0.00 0.00 76424.14 6262.33 118838.61 00:23:28.315 0 00:23:28.315 05:37:34 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3278992 00:23:28.315 05:37:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3278992 ']' 00:23:28.315 05:37:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3278992 00:23:28.315 05:37:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3278992 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3278992' 00:23:28.315 killing process with pid 3278992 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3278992 00:23:28.315 Received shutdown signal, test time was about 1.000000 seconds 00:23:28.315 00:23:28.315 Latency(us) 00:23:28.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.315 =================================================================================================================== 00:23:28.315 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3278992 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3278708 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3278708 ']' 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3278708 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3278708 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3278708' 00:23:28.315 killing process with pid 3278708 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3278708 00:23:28.315 [2024-07-14 05:37:35.258567] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:28.315 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3278708 00:23:28.573 05:37:35 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:28.573 05:37:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:28.573 
05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:28.573 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.573 05:37:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3279277 00:23:28.573 05:37:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:28.573 05:37:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3279277 00:23:28.573 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3279277 ']' 00:23:28.573 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.573 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:28.573 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.573 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:28.573 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.573 [2024-07-14 05:37:35.531531] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:28.573 [2024-07-14 05:37:35.531619] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.573 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.573 [2024-07-14 05:37:35.599859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.831 [2024-07-14 05:37:35.695860] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.831 [2024-07-14 05:37:35.695927] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.831 [2024-07-14 05:37:35.695944] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.831 [2024-07-14 05:37:35.695957] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.831 [2024-07-14 05:37:35.695969] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:28.831 [2024-07-14 05:37:35.696000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.831 [2024-07-14 05:37:35.848368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.831 malloc0 00:23:28.831 [2024-07-14 05:37:35.880964] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:28.831 [2024-07-14 05:37:35.881267] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3279298 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3279298 /var/tmp/bdevperf.sock 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3279298 ']' 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:28.831 05:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.089 [2024-07-14 05:37:35.948657] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:29.090 [2024-07-14 05:37:35.948719] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279298 ] 00:23:29.090 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.090 [2024-07-14 05:37:36.011075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.090 [2024-07-14 05:37:36.103720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.347 05:37:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:29.347 05:37:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:29.347 05:37:36 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GuMjktz3FM 00:23:29.605 05:37:36 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:29.605 [2024-07-14 05:37:36.701460] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.862 nvme0n1 00:23:29.862 05:37:36 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:29.862 Running I/O for 1 seconds... 00:23:31.233 00:23:31.233 Latency(us) 00:23:31.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.233 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:31.233 Verification LBA range: start 0x0 length 0x2000 00:23:31.233 nvme0n1 : 1.07 1605.92 6.27 0.00 0.00 77734.65 6796.33 112624.83 00:23:31.233 =================================================================================================================== 00:23:31.233 Total : 1605.92 6.27 0.00 0.00 77734.65 6796.33 112624.83 00:23:31.233 0 00:23:31.233 05:37:37 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:31.233 05:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.233 05:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.233 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.233 05:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:31.233 "subsystems": [ 00:23:31.233 { 00:23:31.234 "subsystem": "keyring", 00:23:31.234 "config": [ 00:23:31.234 { 00:23:31.234 "method": "keyring_file_add_key", 00:23:31.234 "params": { 00:23:31.234 "name": "key0", 00:23:31.234 "path": "/tmp/tmp.GuMjktz3FM" 00:23:31.234 } 00:23:31.234 } 00:23:31.234 ] 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "subsystem": "iobuf", 00:23:31.234 "config": [ 00:23:31.234 { 00:23:31.234 "method": "iobuf_set_options", 00:23:31.234 "params": { 00:23:31.234 "small_pool_count": 8192, 00:23:31.234 "large_pool_count": 1024, 00:23:31.234 "small_bufsize": 8192, 00:23:31.234 "large_bufsize": 135168 00:23:31.234 } 00:23:31.234 } 00:23:31.234 ] 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "subsystem": "sock", 00:23:31.234 "config": [ 00:23:31.234 { 00:23:31.234 "method": "sock_set_default_impl", 00:23:31.234 "params": { 00:23:31.234 "impl_name": "posix" 00:23:31.234 } 00:23:31.234 }, 00:23:31.234 
{ 00:23:31.234 "method": "sock_impl_set_options", 00:23:31.234 "params": { 00:23:31.234 "impl_name": "ssl", 00:23:31.234 "recv_buf_size": 4096, 00:23:31.234 "send_buf_size": 4096, 00:23:31.234 "enable_recv_pipe": true, 00:23:31.234 "enable_quickack": false, 00:23:31.234 "enable_placement_id": 0, 00:23:31.234 "enable_zerocopy_send_server": true, 00:23:31.234 "enable_zerocopy_send_client": false, 00:23:31.234 "zerocopy_threshold": 0, 00:23:31.234 "tls_version": 0, 00:23:31.234 "enable_ktls": false 00:23:31.234 } 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "method": "sock_impl_set_options", 00:23:31.234 "params": { 00:23:31.234 "impl_name": "posix", 00:23:31.234 "recv_buf_size": 2097152, 00:23:31.234 "send_buf_size": 2097152, 00:23:31.234 "enable_recv_pipe": true, 00:23:31.234 "enable_quickack": false, 00:23:31.234 "enable_placement_id": 0, 00:23:31.234 "enable_zerocopy_send_server": true, 00:23:31.234 "enable_zerocopy_send_client": false, 00:23:31.234 "zerocopy_threshold": 0, 00:23:31.234 "tls_version": 0, 00:23:31.234 "enable_ktls": false 00:23:31.234 } 00:23:31.234 } 00:23:31.234 ] 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "subsystem": "vmd", 00:23:31.234 "config": [] 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "subsystem": "accel", 00:23:31.234 "config": [ 00:23:31.234 { 00:23:31.234 "method": "accel_set_options", 00:23:31.234 "params": { 00:23:31.234 "small_cache_size": 128, 00:23:31.234 "large_cache_size": 16, 00:23:31.234 "task_count": 2048, 00:23:31.234 "sequence_count": 2048, 00:23:31.234 "buf_count": 2048 00:23:31.234 } 00:23:31.234 } 00:23:31.234 ] 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "subsystem": "bdev", 00:23:31.234 "config": [ 00:23:31.234 { 00:23:31.234 "method": "bdev_set_options", 00:23:31.234 "params": { 00:23:31.234 "bdev_io_pool_size": 65535, 00:23:31.234 "bdev_io_cache_size": 256, 00:23:31.234 "bdev_auto_examine": true, 00:23:31.234 "iobuf_small_cache_size": 128, 00:23:31.234 "iobuf_large_cache_size": 16 00:23:31.234 } 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "method": "bdev_raid_set_options", 00:23:31.234 "params": { 00:23:31.234 "process_window_size_kb": 1024 00:23:31.234 } 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "method": "bdev_iscsi_set_options", 00:23:31.234 "params": { 00:23:31.234 "timeout_sec": 30 00:23:31.234 } 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "method": "bdev_nvme_set_options", 00:23:31.234 "params": { 00:23:31.234 "action_on_timeout": "none", 00:23:31.234 "timeout_us": 0, 00:23:31.234 "timeout_admin_us": 0, 00:23:31.234 "keep_alive_timeout_ms": 10000, 00:23:31.234 "arbitration_burst": 0, 00:23:31.234 "low_priority_weight": 0, 00:23:31.234 "medium_priority_weight": 0, 00:23:31.234 "high_priority_weight": 0, 00:23:31.234 "nvme_adminq_poll_period_us": 10000, 00:23:31.234 "nvme_ioq_poll_period_us": 0, 00:23:31.234 "io_queue_requests": 0, 00:23:31.234 "delay_cmd_submit": true, 00:23:31.234 "transport_retry_count": 4, 00:23:31.234 "bdev_retry_count": 3, 00:23:31.234 "transport_ack_timeout": 0, 00:23:31.234 "ctrlr_loss_timeout_sec": 0, 00:23:31.234 "reconnect_delay_sec": 0, 00:23:31.234 "fast_io_fail_timeout_sec": 0, 00:23:31.234 "disable_auto_failback": false, 00:23:31.234 "generate_uuids": false, 00:23:31.234 "transport_tos": 0, 00:23:31.234 "nvme_error_stat": false, 00:23:31.234 "rdma_srq_size": 0, 00:23:31.234 "io_path_stat": false, 00:23:31.234 "allow_accel_sequence": false, 00:23:31.234 "rdma_max_cq_size": 0, 00:23:31.234 "rdma_cm_event_timeout_ms": 0, 00:23:31.234 "dhchap_digests": [ 00:23:31.234 "sha256", 00:23:31.234 "sha384", 
00:23:31.234 "sha512" 00:23:31.234 ], 00:23:31.234 "dhchap_dhgroups": [ 00:23:31.234 "null", 00:23:31.234 "ffdhe2048", 00:23:31.234 "ffdhe3072", 00:23:31.234 "ffdhe4096", 00:23:31.234 "ffdhe6144", 00:23:31.234 "ffdhe8192" 00:23:31.234 ] 00:23:31.234 } 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "method": "bdev_nvme_set_hotplug", 00:23:31.234 "params": { 00:23:31.234 "period_us": 100000, 00:23:31.234 "enable": false 00:23:31.234 } 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "method": "bdev_malloc_create", 00:23:31.234 "params": { 00:23:31.234 "name": "malloc0", 00:23:31.234 "num_blocks": 8192, 00:23:31.234 "block_size": 4096, 00:23:31.234 "physical_block_size": 4096, 00:23:31.234 "uuid": "22dd721b-5619-4077-8ed5-036587284c71", 00:23:31.234 "optimal_io_boundary": 0 00:23:31.234 } 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "method": "bdev_wait_for_examine" 00:23:31.234 } 00:23:31.234 ] 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "subsystem": "nbd", 00:23:31.234 "config": [] 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "subsystem": "scheduler", 00:23:31.234 "config": [ 00:23:31.234 { 00:23:31.234 "method": "framework_set_scheduler", 00:23:31.234 "params": { 00:23:31.234 "name": "static" 00:23:31.234 } 00:23:31.234 } 00:23:31.234 ] 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "subsystem": "nvmf", 00:23:31.234 "config": [ 00:23:31.234 { 00:23:31.234 "method": "nvmf_set_config", 00:23:31.234 "params": { 00:23:31.234 "discovery_filter": "match_any", 00:23:31.234 "admin_cmd_passthru": { 00:23:31.234 "identify_ctrlr": false 00:23:31.234 } 00:23:31.234 } 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "method": "nvmf_set_max_subsystems", 00:23:31.234 "params": { 00:23:31.234 "max_subsystems": 1024 00:23:31.234 } 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "method": "nvmf_set_crdt", 00:23:31.234 "params": { 00:23:31.234 "crdt1": 0, 00:23:31.234 "crdt2": 0, 00:23:31.234 "crdt3": 0 00:23:31.234 } 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "method": "nvmf_create_transport", 00:23:31.234 "params": { 00:23:31.234 "trtype": "TCP", 00:23:31.234 "max_queue_depth": 128, 00:23:31.234 "max_io_qpairs_per_ctrlr": 127, 00:23:31.234 "in_capsule_data_size": 4096, 00:23:31.234 "max_io_size": 131072, 00:23:31.234 "io_unit_size": 131072, 00:23:31.234 "max_aq_depth": 128, 00:23:31.234 "num_shared_buffers": 511, 00:23:31.234 "buf_cache_size": 4294967295, 00:23:31.234 "dif_insert_or_strip": false, 00:23:31.234 "zcopy": false, 00:23:31.234 "c2h_success": false, 00:23:31.234 "sock_priority": 0, 00:23:31.234 "abort_timeout_sec": 1, 00:23:31.234 "ack_timeout": 0, 00:23:31.234 "data_wr_pool_size": 0 00:23:31.234 } 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "method": "nvmf_create_subsystem", 00:23:31.234 "params": { 00:23:31.234 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.234 "allow_any_host": false, 00:23:31.234 "serial_number": "00000000000000000000", 00:23:31.234 "model_number": "SPDK bdev Controller", 00:23:31.234 "max_namespaces": 32, 00:23:31.234 "min_cntlid": 1, 00:23:31.234 "max_cntlid": 65519, 00:23:31.234 "ana_reporting": false 00:23:31.234 } 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "method": "nvmf_subsystem_add_host", 00:23:31.234 "params": { 00:23:31.234 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.234 "host": "nqn.2016-06.io.spdk:host1", 00:23:31.234 "psk": "key0" 00:23:31.234 } 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "method": "nvmf_subsystem_add_ns", 00:23:31.234 "params": { 00:23:31.234 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.234 "namespace": { 00:23:31.234 "nsid": 1, 00:23:31.234 "bdev_name": 
"malloc0", 00:23:31.234 "nguid": "22DD721B561940778ED5036587284C71", 00:23:31.234 "uuid": "22dd721b-5619-4077-8ed5-036587284c71", 00:23:31.234 "no_auto_visible": false 00:23:31.234 } 00:23:31.234 } 00:23:31.234 }, 00:23:31.234 { 00:23:31.234 "method": "nvmf_subsystem_add_listener", 00:23:31.234 "params": { 00:23:31.234 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.234 "listen_address": { 00:23:31.234 "trtype": "TCP", 00:23:31.234 "adrfam": "IPv4", 00:23:31.234 "traddr": "10.0.0.2", 00:23:31.234 "trsvcid": "4420" 00:23:31.234 }, 00:23:31.234 "secure_channel": true 00:23:31.234 } 00:23:31.234 } 00:23:31.234 ] 00:23:31.234 } 00:23:31.234 ] 00:23:31.234 }' 00:23:31.234 05:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:31.493 05:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:31.493 "subsystems": [ 00:23:31.493 { 00:23:31.493 "subsystem": "keyring", 00:23:31.493 "config": [ 00:23:31.493 { 00:23:31.493 "method": "keyring_file_add_key", 00:23:31.493 "params": { 00:23:31.493 "name": "key0", 00:23:31.493 "path": "/tmp/tmp.GuMjktz3FM" 00:23:31.493 } 00:23:31.493 } 00:23:31.493 ] 00:23:31.493 }, 00:23:31.493 { 00:23:31.493 "subsystem": "iobuf", 00:23:31.493 "config": [ 00:23:31.493 { 00:23:31.493 "method": "iobuf_set_options", 00:23:31.493 "params": { 00:23:31.493 "small_pool_count": 8192, 00:23:31.493 "large_pool_count": 1024, 00:23:31.493 "small_bufsize": 8192, 00:23:31.493 "large_bufsize": 135168 00:23:31.493 } 00:23:31.493 } 00:23:31.493 ] 00:23:31.493 }, 00:23:31.493 { 00:23:31.493 "subsystem": "sock", 00:23:31.493 "config": [ 00:23:31.493 { 00:23:31.493 "method": "sock_set_default_impl", 00:23:31.493 "params": { 00:23:31.493 "impl_name": "posix" 00:23:31.493 } 00:23:31.493 }, 00:23:31.493 { 00:23:31.493 "method": "sock_impl_set_options", 00:23:31.493 "params": { 00:23:31.493 "impl_name": "ssl", 00:23:31.493 "recv_buf_size": 4096, 00:23:31.493 "send_buf_size": 4096, 00:23:31.493 "enable_recv_pipe": true, 00:23:31.493 "enable_quickack": false, 00:23:31.493 "enable_placement_id": 0, 00:23:31.493 "enable_zerocopy_send_server": true, 00:23:31.493 "enable_zerocopy_send_client": false, 00:23:31.493 "zerocopy_threshold": 0, 00:23:31.493 "tls_version": 0, 00:23:31.493 "enable_ktls": false 00:23:31.493 } 00:23:31.493 }, 00:23:31.493 { 00:23:31.493 "method": "sock_impl_set_options", 00:23:31.493 "params": { 00:23:31.493 "impl_name": "posix", 00:23:31.493 "recv_buf_size": 2097152, 00:23:31.493 "send_buf_size": 2097152, 00:23:31.493 "enable_recv_pipe": true, 00:23:31.493 "enable_quickack": false, 00:23:31.493 "enable_placement_id": 0, 00:23:31.493 "enable_zerocopy_send_server": true, 00:23:31.493 "enable_zerocopy_send_client": false, 00:23:31.493 "zerocopy_threshold": 0, 00:23:31.493 "tls_version": 0, 00:23:31.493 "enable_ktls": false 00:23:31.493 } 00:23:31.493 } 00:23:31.493 ] 00:23:31.493 }, 00:23:31.493 { 00:23:31.493 "subsystem": "vmd", 00:23:31.493 "config": [] 00:23:31.493 }, 00:23:31.493 { 00:23:31.493 "subsystem": "accel", 00:23:31.493 "config": [ 00:23:31.493 { 00:23:31.493 "method": "accel_set_options", 00:23:31.493 "params": { 00:23:31.493 "small_cache_size": 128, 00:23:31.493 "large_cache_size": 16, 00:23:31.493 "task_count": 2048, 00:23:31.493 "sequence_count": 2048, 00:23:31.493 "buf_count": 2048 00:23:31.493 } 00:23:31.493 } 00:23:31.493 ] 00:23:31.493 }, 00:23:31.493 { 00:23:31.493 "subsystem": "bdev", 00:23:31.493 "config": [ 00:23:31.493 { 00:23:31.493 
"method": "bdev_set_options", 00:23:31.493 "params": { 00:23:31.493 "bdev_io_pool_size": 65535, 00:23:31.493 "bdev_io_cache_size": 256, 00:23:31.493 "bdev_auto_examine": true, 00:23:31.493 "iobuf_small_cache_size": 128, 00:23:31.493 "iobuf_large_cache_size": 16 00:23:31.493 } 00:23:31.493 }, 00:23:31.493 { 00:23:31.493 "method": "bdev_raid_set_options", 00:23:31.493 "params": { 00:23:31.493 "process_window_size_kb": 1024 00:23:31.493 } 00:23:31.493 }, 00:23:31.493 { 00:23:31.493 "method": "bdev_iscsi_set_options", 00:23:31.493 "params": { 00:23:31.493 "timeout_sec": 30 00:23:31.493 } 00:23:31.493 }, 00:23:31.493 { 00:23:31.493 "method": "bdev_nvme_set_options", 00:23:31.493 "params": { 00:23:31.493 "action_on_timeout": "none", 00:23:31.493 "timeout_us": 0, 00:23:31.493 "timeout_admin_us": 0, 00:23:31.493 "keep_alive_timeout_ms": 10000, 00:23:31.493 "arbitration_burst": 0, 00:23:31.493 "low_priority_weight": 0, 00:23:31.493 "medium_priority_weight": 0, 00:23:31.493 "high_priority_weight": 0, 00:23:31.493 "nvme_adminq_poll_period_us": 10000, 00:23:31.493 "nvme_ioq_poll_period_us": 0, 00:23:31.493 "io_queue_requests": 512, 00:23:31.493 "delay_cmd_submit": true, 00:23:31.493 "transport_retry_count": 4, 00:23:31.493 "bdev_retry_count": 3, 00:23:31.493 "transport_ack_timeout": 0, 00:23:31.493 "ctrlr_loss_timeout_sec": 0, 00:23:31.493 "reconnect_delay_sec": 0, 00:23:31.493 "fast_io_fail_timeout_sec": 0, 00:23:31.493 "disable_auto_failback": false, 00:23:31.493 "generate_uuids": false, 00:23:31.493 "transport_tos": 0, 00:23:31.493 "nvme_error_stat": false, 00:23:31.493 "rdma_srq_size": 0, 00:23:31.493 "io_path_stat": false, 00:23:31.493 "allow_accel_sequence": false, 00:23:31.493 "rdma_max_cq_size": 0, 00:23:31.493 "rdma_cm_event_timeout_ms": 0, 00:23:31.493 "dhchap_digests": [ 00:23:31.493 "sha256", 00:23:31.493 "sha384", 00:23:31.493 "sha512" 00:23:31.493 ], 00:23:31.493 "dhchap_dhgroups": [ 00:23:31.493 "null", 00:23:31.493 "ffdhe2048", 00:23:31.493 "ffdhe3072", 00:23:31.493 "ffdhe4096", 00:23:31.493 "ffdhe6144", 00:23:31.493 "ffdhe8192" 00:23:31.493 ] 00:23:31.493 } 00:23:31.493 }, 00:23:31.493 { 00:23:31.493 "method": "bdev_nvme_attach_controller", 00:23:31.493 "params": { 00:23:31.493 "name": "nvme0", 00:23:31.493 "trtype": "TCP", 00:23:31.493 "adrfam": "IPv4", 00:23:31.493 "traddr": "10.0.0.2", 00:23:31.493 "trsvcid": "4420", 00:23:31.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.493 "prchk_reftag": false, 00:23:31.493 "prchk_guard": false, 00:23:31.493 "ctrlr_loss_timeout_sec": 0, 00:23:31.493 "reconnect_delay_sec": 0, 00:23:31.493 "fast_io_fail_timeout_sec": 0, 00:23:31.493 "psk": "key0", 00:23:31.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.494 "hdgst": false, 00:23:31.494 "ddgst": false 00:23:31.494 } 00:23:31.494 }, 00:23:31.494 { 00:23:31.494 "method": "bdev_nvme_set_hotplug", 00:23:31.494 "params": { 00:23:31.494 "period_us": 100000, 00:23:31.494 "enable": false 00:23:31.494 } 00:23:31.494 }, 00:23:31.494 { 00:23:31.494 "method": "bdev_enable_histogram", 00:23:31.494 "params": { 00:23:31.494 "name": "nvme0n1", 00:23:31.494 "enable": true 00:23:31.494 } 00:23:31.494 }, 00:23:31.494 { 00:23:31.494 "method": "bdev_wait_for_examine" 00:23:31.494 } 00:23:31.494 ] 00:23:31.494 }, 00:23:31.494 { 00:23:31.494 "subsystem": "nbd", 00:23:31.494 "config": [] 00:23:31.494 } 00:23:31.494 ] 00:23:31.494 }' 00:23:31.494 05:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3279298 00:23:31.494 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3279298 
']' 00:23:31.494 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3279298 00:23:31.494 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:31.494 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:31.494 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3279298 00:23:31.494 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:31.494 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:31.494 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3279298' 00:23:31.494 killing process with pid 3279298 00:23:31.494 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3279298 00:23:31.494 Received shutdown signal, test time was about 1.000000 seconds 00:23:31.494 00:23:31.494 Latency(us) 00:23:31.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.494 =================================================================================================================== 00:23:31.494 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:31.494 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3279298 00:23:31.766 05:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3279277 00:23:31.767 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3279277 ']' 00:23:31.767 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3279277 00:23:31.767 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:31.767 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:31.767 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3279277 00:23:31.767 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:31.767 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:31.767 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3279277' 00:23:31.767 killing process with pid 3279277 00:23:31.767 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3279277 00:23:31.767 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3279277 00:23:32.027 05:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:32.027 05:37:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:32.027 05:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:32.027 "subsystems": [ 00:23:32.027 { 00:23:32.027 "subsystem": "keyring", 00:23:32.027 "config": [ 00:23:32.027 { 00:23:32.027 "method": "keyring_file_add_key", 00:23:32.027 "params": { 00:23:32.027 "name": "key0", 00:23:32.027 "path": "/tmp/tmp.GuMjktz3FM" 00:23:32.027 } 00:23:32.027 } 00:23:32.027 ] 00:23:32.027 }, 00:23:32.027 { 00:23:32.027 "subsystem": "iobuf", 00:23:32.027 "config": [ 00:23:32.027 { 00:23:32.027 "method": "iobuf_set_options", 00:23:32.027 "params": { 00:23:32.027 "small_pool_count": 8192, 00:23:32.027 "large_pool_count": 1024, 00:23:32.027 "small_bufsize": 8192, 00:23:32.027 "large_bufsize": 135168 00:23:32.027 } 00:23:32.027 } 00:23:32.027 ] 00:23:32.027 }, 00:23:32.027 { 00:23:32.027 "subsystem": "sock", 00:23:32.027 "config": [ 00:23:32.027 { 00:23:32.027 "method": "sock_set_default_impl", 
00:23:32.027 "params": { 00:23:32.027 "impl_name": "posix" 00:23:32.027 } 00:23:32.027 }, 00:23:32.027 { 00:23:32.027 "method": "sock_impl_set_options", 00:23:32.027 "params": { 00:23:32.027 "impl_name": "ssl", 00:23:32.027 "recv_buf_size": 4096, 00:23:32.027 "send_buf_size": 4096, 00:23:32.027 "enable_recv_pipe": true, 00:23:32.027 "enable_quickack": false, 00:23:32.027 "enable_placement_id": 0, 00:23:32.027 "enable_zerocopy_send_server": true, 00:23:32.027 "enable_zerocopy_send_client": false, 00:23:32.027 "zerocopy_threshold": 0, 00:23:32.027 "tls_version": 0, 00:23:32.027 "enable_ktls": false 00:23:32.027 } 00:23:32.027 }, 00:23:32.027 { 00:23:32.027 "method": "sock_impl_set_options", 00:23:32.027 "params": { 00:23:32.027 "impl_name": "posix", 00:23:32.027 "recv_buf_size": 2097152, 00:23:32.027 "send_buf_size": 2097152, 00:23:32.027 "enable_recv_pipe": true, 00:23:32.027 "enable_quickack": false, 00:23:32.027 "enable_placement_id": 0, 00:23:32.027 "enable_zerocopy_send_server": true, 00:23:32.027 "enable_zerocopy_send_client": false, 00:23:32.027 "zerocopy_threshold": 0, 00:23:32.027 "tls_version": 0, 00:23:32.027 "enable_ktls": false 00:23:32.027 } 00:23:32.027 } 00:23:32.027 ] 00:23:32.027 }, 00:23:32.027 { 00:23:32.027 "subsystem": "vmd", 00:23:32.027 "config": [] 00:23:32.027 }, 00:23:32.027 { 00:23:32.028 "subsystem": "accel", 00:23:32.028 "config": [ 00:23:32.028 { 00:23:32.028 "method": "accel_set_options", 00:23:32.028 "params": { 00:23:32.028 "small_cache_size": 128, 00:23:32.028 "large_cache_size": 16, 00:23:32.028 "task_count": 2048, 00:23:32.028 "sequence_count": 2048, 00:23:32.028 "buf_count": 2048 00:23:32.028 } 00:23:32.028 } 00:23:32.028 ] 00:23:32.028 }, 00:23:32.028 { 00:23:32.028 "subsystem": "bdev", 00:23:32.028 "config": [ 00:23:32.028 { 00:23:32.028 "method": "bdev_set_options", 00:23:32.028 "params": { 00:23:32.028 "bdev_io_pool_size": 65535, 00:23:32.028 "bdev_io_cache_size": 256, 00:23:32.028 "bdev_auto_examine": true, 00:23:32.028 "iobuf_small_cache_size": 128, 00:23:32.028 "iobuf_large_cache_size": 16 00:23:32.028 } 00:23:32.028 }, 00:23:32.028 { 00:23:32.028 "method": "bdev_raid_set_options", 00:23:32.028 "params": { 00:23:32.028 "process_window_size_kb": 1024 00:23:32.028 } 00:23:32.028 }, 00:23:32.028 { 00:23:32.028 "method": "bdev_iscsi_set_options", 00:23:32.028 "params": { 00:23:32.028 "timeout_sec": 30 00:23:32.028 } 00:23:32.028 }, 00:23:32.028 { 00:23:32.028 "method": "bdev_nvme_set_options", 00:23:32.028 "params": { 00:23:32.028 "action_on_timeout": "none", 00:23:32.028 "timeout_us": 0, 00:23:32.028 "timeout_admin_us": 0, 00:23:32.028 "keep_alive_timeout_ms": 10000, 00:23:32.028 "arbitration_burst": 0, 00:23:32.028 "low_priority_weight": 0, 00:23:32.028 "medium_priority_weight": 0, 00:23:32.028 "high_priority_weight": 0, 00:23:32.028 "nvme_adminq_poll_period_us": 10000, 00:23:32.028 "nvme_ioq_poll_period_us": 0, 00:23:32.028 "io_queue_requests": 0, 00:23:32.028 "delay_cmd_submit": true, 00:23:32.028 "transport_retry_count": 4, 00:23:32.028 "bdev_retry_count": 3, 00:23:32.028 "transport_ack_timeout": 0, 00:23:32.028 "ctrlr_loss_timeout_sec": 0, 00:23:32.028 "reconnect_delay_sec": 0, 00:23:32.028 "fast_io_fail_timeout_sec": 0, 00:23:32.028 "disable_auto_failback": false, 00:23:32.028 "generate_uuids": false, 00:23:32.028 "transport_tos": 0, 00:23:32.028 "nvme_error_stat": false, 00:23:32.028 "rdma_srq_size": 0, 00:23:32.028 "io_path_stat": false, 00:23:32.028 "allow_accel_sequence": false, 00:23:32.028 "rdma_max_cq_size": 0, 00:23:32.028 
"rdma_cm_event_timeout_ms": 0, 00:23:32.028 "dhchap_digests": [ 00:23:32.028 "sha256", 00:23:32.028 "sha384", 00:23:32.028 "sha512" 00:23:32.028 ], 00:23:32.028 "dhchap_dhgroups": [ 00:23:32.028 "null", 00:23:32.028 "ffdhe2048", 00:23:32.028 "ffdhe3072", 00:23:32.028 "ffdhe4096", 00:23:32.028 "ffdhe6144", 00:23:32.028 "ffdhe8192" 00:23:32.028 ] 00:23:32.028 } 00:23:32.028 }, 00:23:32.028 { 00:23:32.028 "method": "bdev_nvme_set_hotplug", 00:23:32.028 "params": { 00:23:32.028 "period_us": 100000, 00:23:32.028 "enable": false 00:23:32.028 } 00:23:32.028 }, 00:23:32.028 { 00:23:32.028 "method": "bdev_malloc_create", 00:23:32.028 "params": { 00:23:32.028 "name": "malloc0", 00:23:32.028 "num_blocks": 8192, 00:23:32.028 "block_size": 4096, 00:23:32.028 "physical_block_size": 4096, 00:23:32.028 "uuid": "22dd721b-5619-4077-8ed5-036587284c71", 00:23:32.028 "optimal_io_boundary": 0 00:23:32.028 } 00:23:32.028 }, 00:23:32.028 { 00:23:32.028 "method": "bdev_wait_for_examine" 00:23:32.028 } 00:23:32.028 ] 00:23:32.028 }, 00:23:32.028 { 00:23:32.028 "subsystem": "nbd", 00:23:32.028 "config": [] 00:23:32.028 }, 00:23:32.028 { 00:23:32.028 "subsystem": "scheduler", 00:23:32.028 "config": [ 00:23:32.028 { 00:23:32.028 "method": "framework_set_scheduler", 00:23:32.028 "params": { 00:23:32.028 "name": "static" 00:23:32.028 } 00:23:32.028 } 00:23:32.028 ] 00:23:32.028 }, 00:23:32.028 { 00:23:32.028 "subsystem": "nvmf", 00:23:32.028 "config": [ 00:23:32.028 { 00:23:32.028 "method": "nvmf_set_config", 00:23:32.028 "params": { 00:23:32.028 "discovery_filter": "match_any", 00:23:32.028 "admin_cmd_passthru": { 00:23:32.028 "identify_ctrlr": false 00:23:32.028 } 00:23:32.028 } 00:23:32.028 }, 00:23:32.028 { 00:23:32.028 "method": "nvmf_set_max_subsystems", 00:23:32.028 "params": { 00:23:32.028 "max_subsystems": 1024 00:23:32.028 } 00:23:32.028 }, 00:23:32.028 { 00:23:32.028 "method": "nvmf_set_crdt", 00:23:32.028 "params": { 00:23:32.028 "crdt1": 0, 00:23:32.028 "crdt2": 0, 00:23:32.028 "crdt3": 0 00:23:32.028 } 00:23:32.028 }, 00:23:32.028 { 00:23:32.028 "method": "nvmf_create_transport", 00:23:32.028 "params": { 00:23:32.028 "trtype": "TCP", 00:23:32.028 "max_queue_depth": 128, 00:23:32.028 "max_io_qpairs_per_ctrlr": 127, 00:23:32.028 "in_capsule_data_size": 4096, 00:23:32.028 "max_io_size": 131072, 00:23:32.028 "io_unit_size": 131072, 00:23:32.028 "max_aq_depth": 128, 00:23:32.028 "num_shared_buffers": 511, 00:23:32.028 "buf_cache_size": 4294967295, 00:23:32.028 "dif_insert_or_strip": false, 00:23:32.028 "zcopy": false, 00:23:32.028 "c2h_success": false, 00:23:32.028 "sock_priority": 0, 00:23:32.028 "abort_timeout_sec": 1, 00:23:32.028 "ack_timeout": 0, 00:23:32.028 "data_wr_pool_size": 0 00:23:32.028 } 00:23:32.028 }, 00:23:32.028 { 00:23:32.028 "method": "nvmf_create_subsystem", 00:23:32.028 "params": { 00:23:32.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.028 "allow_any_host": false, 00:23:32.028 "serial_number": "00000000000000000000", 00:23:32.028 "model_number": "SPDK bdev Controller", 00:23:32.028 "max_namespaces": 32, 00:23:32.028 "min_cntlid": 1, 00:23:32.028 "max_cntlid": 65519, 00:23:32.028 "ana_reporting": false 00:23:32.028 } 00:23:32.028 }, 00:23:32.028 { 00:23:32.028 "method": "nvmf_subsystem_add_host", 00:23:32.028 "params": { 00:23:32.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.028 "host": "nqn.2016-06.io.spdk:host1", 00:23:32.028 "psk": "key0" 00:23:32.028 } 00:23:32.028 }, 00:23:32.028 { 00:23:32.028 "method": "nvmf_subsystem_add_ns", 00:23:32.028 "params": { 00:23:32.028 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:32.028 "namespace": { 00:23:32.028 "nsid": 1, 00:23:32.028 "bdev_name": "malloc0", 00:23:32.028 "nguid": "22DD721B561940778ED5036587284C71", 00:23:32.028 "uuid": "22dd721b-5619-4077-8ed5-036587284c71", 00:23:32.028 "no_auto_visible": false 00:23:32.028 } 00:23:32.028 } 00:23:32.028 }, 00:23:32.028 { 00:23:32.028 "method": "nvmf_subsystem_add_listener", 00:23:32.028 "params": { 00:23:32.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.028 "listen_address": { 00:23:32.028 "trtype": "TCP", 00:23:32.028 "adrfam": "IPv4", 00:23:32.028 "traddr": "10.0.0.2", 00:23:32.028 "trsvcid": "4420" 00:23:32.028 }, 00:23:32.028 "secure_channel": true 00:23:32.028 } 00:23:32.028 } 00:23:32.028 ] 00:23:32.028 } 00:23:32.028 ] 00:23:32.028 }' 00:23:32.028 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:32.028 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.028 05:37:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3279707 00:23:32.028 05:37:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:32.028 05:37:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3279707 00:23:32.028 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3279707 ']' 00:23:32.028 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.028 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:32.028 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.028 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:32.028 05:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.028 [2024-07-14 05:37:38.952572] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:32.028 [2024-07-14 05:37:38.952646] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.028 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.028 [2024-07-14 05:37:39.015970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.028 [2024-07-14 05:37:39.098966] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.028 [2024-07-14 05:37:39.099033] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.028 [2024-07-14 05:37:39.099047] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.028 [2024-07-14 05:37:39.099067] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.028 [2024-07-14 05:37:39.099078] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:32.028 [2024-07-14 05:37:39.099173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.286 [2024-07-14 05:37:39.343534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.286 [2024-07-14 05:37:39.375535] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:32.286 [2024-07-14 05:37:39.386079] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.852 05:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:32.852 05:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:32.852 05:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:32.852 05:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.852 05:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.852 05:37:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.852 05:37:39 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3279855 00:23:32.852 05:37:39 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3279855 /var/tmp/bdevperf.sock 00:23:32.852 05:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3279855 ']' 00:23:32.852 05:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.852 05:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:32.852 05:37:39 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:32.852 05:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:32.852 05:37:39 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:32.852 "subsystems": [ 00:23:32.852 { 00:23:32.852 "subsystem": "keyring", 00:23:32.852 "config": [ 00:23:32.852 { 00:23:32.852 "method": "keyring_file_add_key", 00:23:32.852 "params": { 00:23:32.852 "name": "key0", 00:23:32.852 "path": "/tmp/tmp.GuMjktz3FM" 00:23:32.852 } 00:23:32.852 } 00:23:32.852 ] 00:23:32.852 }, 00:23:32.852 { 00:23:32.852 "subsystem": "iobuf", 00:23:32.852 "config": [ 00:23:32.852 { 00:23:32.852 "method": "iobuf_set_options", 00:23:32.852 "params": { 00:23:32.852 "small_pool_count": 8192, 00:23:32.852 "large_pool_count": 1024, 00:23:32.852 "small_bufsize": 8192, 00:23:32.852 "large_bufsize": 135168 00:23:32.852 } 00:23:32.852 } 00:23:32.852 ] 00:23:32.852 }, 00:23:32.852 { 00:23:32.853 "subsystem": "sock", 00:23:32.853 "config": [ 00:23:32.853 { 00:23:32.853 "method": "sock_set_default_impl", 00:23:32.853 "params": { 00:23:32.853 "impl_name": "posix" 00:23:32.853 } 00:23:32.853 }, 00:23:32.853 { 00:23:32.853 "method": "sock_impl_set_options", 00:23:32.853 "params": { 00:23:32.853 "impl_name": "ssl", 00:23:32.853 "recv_buf_size": 4096, 00:23:32.853 "send_buf_size": 4096, 00:23:32.853 "enable_recv_pipe": true, 00:23:32.853 "enable_quickack": false, 00:23:32.853 "enable_placement_id": 0, 00:23:32.853 "enable_zerocopy_send_server": true, 00:23:32.853 "enable_zerocopy_send_client": false, 00:23:32.853 "zerocopy_threshold": 0, 00:23:32.853 "tls_version": 0, 00:23:32.853 "enable_ktls": false 00:23:32.853 } 00:23:32.853 }, 00:23:32.853 { 00:23:32.853 "method": "sock_impl_set_options", 00:23:32.853 "params": { 00:23:32.853 "impl_name": "posix", 00:23:32.853 "recv_buf_size": 2097152, 00:23:32.853 "send_buf_size": 2097152, 00:23:32.853 "enable_recv_pipe": true, 00:23:32.853 "enable_quickack": false, 00:23:32.853 "enable_placement_id": 0, 00:23:32.853 "enable_zerocopy_send_server": true, 00:23:32.853 "enable_zerocopy_send_client": false, 00:23:32.853 "zerocopy_threshold": 0, 00:23:32.853 "tls_version": 0, 00:23:32.853 "enable_ktls": false 00:23:32.853 } 00:23:32.853 } 00:23:32.853 ] 00:23:32.853 }, 00:23:32.853 { 00:23:32.853 "subsystem": "vmd", 00:23:32.853 "config": [] 00:23:32.853 }, 00:23:32.853 { 00:23:32.853 "subsystem": "accel", 00:23:32.853 "config": [ 00:23:32.853 { 00:23:32.853 "method": "accel_set_options", 00:23:32.853 "params": { 00:23:32.853 "small_cache_size": 128, 00:23:32.853 "large_cache_size": 16, 00:23:32.853 "task_count": 2048, 00:23:32.853 "sequence_count": 2048, 00:23:32.853 "buf_count": 2048 00:23:32.853 } 00:23:32.853 } 00:23:32.853 ] 00:23:32.853 }, 00:23:32.853 { 00:23:32.853 "subsystem": "bdev", 00:23:32.853 "config": [ 00:23:32.853 { 00:23:32.853 "method": "bdev_set_options", 00:23:32.853 "params": { 00:23:32.853 "bdev_io_pool_size": 65535, 00:23:32.853 "bdev_io_cache_size": 256, 00:23:32.853 "bdev_auto_examine": true, 00:23:32.853 "iobuf_small_cache_size": 128, 00:23:32.853 "iobuf_large_cache_size": 16 00:23:32.853 } 00:23:32.853 }, 00:23:32.853 { 00:23:32.853 "method": "bdev_raid_set_options", 00:23:32.853 "params": { 00:23:32.853 "process_window_size_kb": 1024 00:23:32.853 } 00:23:32.853 }, 00:23:32.853 { 00:23:32.853 "method": "bdev_iscsi_set_options", 00:23:32.853 "params": { 00:23:32.853 "timeout_sec": 30 00:23:32.853 } 00:23:32.853 }, 00:23:32.853 { 00:23:32.853 "method": "bdev_nvme_set_options", 00:23:32.853 "params": { 00:23:32.853 "action_on_timeout": "none", 00:23:32.853 "timeout_us": 0, 00:23:32.853 "timeout_admin_us": 0, 00:23:32.853 "keep_alive_timeout_ms": 
10000, 00:23:32.853 "arbitration_burst": 0, 00:23:32.853 "low_priority_weight": 0, 00:23:32.853 "medium_priority_weight": 0, 00:23:32.853 "high_priority_weight": 0, 00:23:32.853 "nvme_adminq_poll_period_us": 10000, 00:23:32.853 "nvme_ioq_poll_period_us": 0, 00:23:32.853 "io_queue_requests": 512, 00:23:32.853 "delay_cmd_submit": true, 00:23:32.853 "transport_retry_count": 4, 00:23:32.853 "bdev_retry_count": 3, 00:23:32.853 "transport_ack_timeout": 0, 00:23:32.853 "ctrlr_loss_timeout_sec": 0, 00:23:32.853 "reconnect_delay_sec": 0, 00:23:32.853 "fast_io_fail_timeout_sec": 0, 00:23:32.853 "disable_auto_failback": false, 00:23:32.853 "generate_uuids": false, 00:23:32.853 "transport_tos": 0, 00:23:32.853 "nvme_error_stat": false, 00:23:32.853 "rdma_srq_size": 0, 00:23:32.853 "io_path_stat": false, 00:23:32.853 "allow_accel_sequence": false, 00:23:32.853 "rdma_max_cq_size": 0, 00:23:32.853 "rdma_cm_event_timeout_ms": 0, 00:23:32.853 "dhchap_digests": [ 00:23:32.853 "sha256", 00:23:32.853 "sha384", 00:23:32.853 "sha512" 00:23:32.853 ], 00:23:32.853 "dhchap_dhgroups": [ 00:23:32.853 "null", 00:23:32.853 "ffdhe2048", 00:23:32.853 "ffdhe3072", 00:23:32.853 "ffdhe4096", 00:23:32.853 "ffdhe6144", 00:23:32.853 "ffdhe8192" 00:23:32.853 ] 00:23:32.853 } 00:23:32.853 }, 00:23:32.853 { 00:23:32.853 "method": "bdev_nvme_attach_controller", 00:23:32.853 "params": { 00:23:32.853 "name": "nvme0", 00:23:32.853 "trtype": "TCP", 00:23:32.853 "adrfam": "IPv4", 00:23:32.853 "traddr": "10.0.0.2", 00:23:32.853 "trsvcid": "4420", 00:23:32.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.853 "prchk_reftag": false, 00:23:32.853 "prchk_guard": false, 00:23:32.853 "ctrlr_loss_timeout_sec": 0, 00:23:32.853 "reconnect_delay_sec": 0, 00:23:32.853 "fast_io_fail_timeout_sec": 0, 00:23:32.853 "psk": "key0", 00:23:32.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:32.853 "hdgst": false, 00:23:32.853 "ddgst": false 00:23:32.853 } 00:23:32.853 }, 00:23:32.853 { 00:23:32.853 "method": "bdev_nvme_set_hotplug", 00:23:32.853 "params": { 00:23:32.853 "period_us": 100000, 00:23:32.853 "enable": false 00:23:32.853 } 00:23:32.853 }, 00:23:32.853 { 00:23:32.853 "method": "bdev_enable_histogram", 00:23:32.853 "params": { 00:23:32.853 "name": "nvme0n1", 00:23:32.853 "enable": true 00:23:32.853 } 00:23:32.853 }, 00:23:32.853 { 00:23:32.853 "method": "bdev_wait_for_examine" 00:23:32.853 } 00:23:32.853 ] 00:23:32.853 }, 00:23:32.853 { 00:23:32.853 "subsystem": "nbd", 00:23:32.853 "config": [] 00:23:32.853 } 00:23:32.853 ] 00:23:32.853 }' 00:23:32.853 05:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:32.853 05:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.111 [2024-07-14 05:37:39.971754] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:33.111 [2024-07-14 05:37:39.971842] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279855 ] 00:23:33.111 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.111 [2024-07-14 05:37:40.039914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.111 [2024-07-14 05:37:40.135592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.369 [2024-07-14 05:37:40.315131] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.935 05:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:33.935 05:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:33.935 05:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:33.935 05:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:34.192 05:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.192 05:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:34.192 Running I/O for 1 seconds... 00:23:35.563 00:23:35.563 Latency(us) 00:23:35.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.563 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:35.563 Verification LBA range: start 0x0 length 0x2000 00:23:35.563 nvme0n1 : 1.10 1051.60 4.11 0.00 0.00 117439.35 7524.50 170102.33 00:23:35.563 =================================================================================================================== 00:23:35.563 Total : 1051.60 4.11 0.00 0.00 117439.35 7524.50 170102.33 00:23:35.563 0 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:35.563 nvmf_trace.0 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3279855 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3279855 ']' 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3279855 
00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3279855 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3279855' 00:23:35.563 killing process with pid 3279855 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3279855 00:23:35.563 Received shutdown signal, test time was about 1.000000 seconds 00:23:35.563 00:23:35.563 Latency(us) 00:23:35.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.563 =================================================================================================================== 00:23:35.563 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:35.563 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3279855 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:35.822 rmmod nvme_tcp 00:23:35.822 rmmod nvme_fabrics 00:23:35.822 rmmod nvme_keyring 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3279707 ']' 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3279707 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3279707 ']' 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3279707 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3279707 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3279707' 00:23:35.822 killing process with pid 3279707 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3279707 00:23:35.822 05:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3279707 00:23:36.081 05:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.081 05:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:36.081 05:37:43 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:36.081 05:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.081 05:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.081 05:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.081 05:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.081 05:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.016 05:37:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:38.016 05:37:45 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.pHtpqqHnPU /tmp/tmp.KgWnOhT9U2 /tmp/tmp.GuMjktz3FM 00:23:38.016 00:23:38.016 real 1m19.101s 00:23:38.016 user 2m1.895s 00:23:38.016 sys 0m29.862s 00:23:38.016 05:37:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:38.016 05:37:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.016 ************************************ 00:23:38.016 END TEST nvmf_tls 00:23:38.016 ************************************ 00:23:38.016 05:37:45 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:38.016 05:37:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:38.016 05:37:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:38.016 05:37:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:38.274 ************************************ 00:23:38.274 START TEST nvmf_fips 00:23:38.274 ************************************ 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:38.274 * Looking for test storage... 
00:23:38.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.274 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.275 05:37:45 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:38.275 Error setting digest 00:23:38.275 00E2F42EF67F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:38.275 00E2F42EF67F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:38.275 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:38.276 05:37:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:38.276 05:37:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:40.177 
05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:40.177 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:40.177 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:40.177 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:40.177 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:40.177 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.178 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.178 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:40.178 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.178 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.178 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:40.178 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:40.178 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.178 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:40.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:23:40.437 00:23:40.437 --- 10.0.0.2 ping statistics --- 00:23:40.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.437 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:40.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:23:40.437 00:23:40.437 --- 10.0.0.1 ping statistics --- 00:23:40.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.437 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3282139 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3282139 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3282139 ']' 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:40.437 05:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:40.437 [2024-07-14 05:37:47.510753] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:40.437 [2024-07-14 05:37:47.510863] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.696 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.696 [2024-07-14 05:37:47.582508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.696 [2024-07-14 05:37:47.672605] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.696 [2024-07-14 05:37:47.672670] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:40.696 [2024-07-14 05:37:47.672695] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.696 [2024-07-14 05:37:47.672708] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.696 [2024-07-14 05:37:47.672720] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.696 [2024-07-14 05:37:47.672750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.696 05:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:40.696 05:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:40.696 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:40.696 05:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.696 05:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:40.955 05:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.955 05:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:40.955 05:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:40.955 05:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:40.955 05:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:40.955 05:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:40.955 05:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:40.955 05:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:40.955 05:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:40.955 [2024-07-14 05:37:48.035898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.955 [2024-07-14 05:37:48.051893] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.955 [2024-07-14 05:37:48.052120] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.213 [2024-07-14 05:37:48.083387] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:41.213 malloc0 00:23:41.213 05:37:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:41.213 05:37:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3282248 00:23:41.213 05:37:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:41.213 05:37:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3282248 /var/tmp/bdevperf.sock 00:23:41.213 05:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3282248 ']' 00:23:41.213 05:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.213 05:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- 
# local max_retries=100 00:23:41.213 05:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.213 05:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:41.213 05:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:41.213 [2024-07-14 05:37:48.169074] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:41.213 [2024-07-14 05:37:48.169159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282248 ] 00:23:41.213 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.213 [2024-07-14 05:37:48.226099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.213 [2024-07-14 05:37:48.309140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.472 05:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:41.472 05:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:41.472 05:37:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:41.730 [2024-07-14 05:37:48.657954] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.730 [2024-07-14 05:37:48.658070] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:41.730 TLSTESTn1 00:23:41.730 05:37:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:41.989 Running I/O for 10 seconds... 
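For reference, the TLS data-path exercise traced above reduces to roughly three commands. This is a condensed sketch, not the exact fips.sh flow: paths are abbreviated, the PSK file is the key.txt written earlier by fips.sh, and 10.0.0.2:4420 is the listener created inside the target namespace on this rig.
  # start bdevperf in RPC-wait mode (-z) on its own core, verify workload, qd 128, 4k IO, 10s
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # attach a TLS-enabled NVMe/TCP controller using the pre-shared key file
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/fips/key.txt
  # kick off the queued workload and wait for the 10-second run to complete
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests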
00:23:51.959 00:23:51.959 Latency(us) 00:23:51.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.959 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:51.959 Verification LBA range: start 0x0 length 0x2000 00:23:51.959 TLSTESTn1 : 10.06 1553.88 6.07 0.00 0.00 82132.93 6213.78 111071.38 00:23:51.959 =================================================================================================================== 00:23:51.959 Total : 1553.88 6.07 0.00 0.00 82132.93 6213.78 111071.38 00:23:51.959 0 00:23:51.959 05:37:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:51.959 05:37:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:51.959 05:37:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:23:51.959 05:37:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:23:51.959 05:37:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:51.959 05:37:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:51.959 05:37:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:51.959 05:37:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:51.959 05:37:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:51.959 05:37:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:51.959 nvmf_trace.0 00:23:51.959 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:23:51.959 05:37:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3282248 00:23:51.959 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3282248 ']' 00:23:51.959 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3282248 00:23:51.959 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:51.959 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:51.959 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3282248 00:23:51.959 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:51.959 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:51.959 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3282248' 00:23:51.959 killing process with pid 3282248 00:23:51.959 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3282248 00:23:51.959 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.959 00:23:51.959 Latency(us) 00:23:51.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.959 =================================================================================================================== 00:23:51.959 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.959 [2024-07-14 05:37:59.049044] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:51.959 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3282248 00:23:52.217 05:37:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:52.217 05:37:59 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:52.217 05:37:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:52.217 05:37:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:52.217 05:37:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:52.217 05:37:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:52.217 05:37:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:52.217 rmmod nvme_tcp 00:23:52.217 rmmod nvme_fabrics 00:23:52.217 rmmod nvme_keyring 00:23:52.217 05:37:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:52.217 05:37:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:52.217 05:37:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:52.217 05:37:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3282139 ']' 00:23:52.217 05:37:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3282139 00:23:52.217 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3282139 ']' 00:23:52.217 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3282139 00:23:52.217 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:52.217 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:52.217 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3282139 00:23:52.476 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:52.476 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:52.476 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3282139' 00:23:52.476 killing process with pid 3282139 00:23:52.476 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3282139 00:23:52.476 [2024-07-14 05:37:59.348039] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:52.476 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3282139 00:23:52.735 05:37:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:52.735 05:37:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:52.735 05:37:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:52.735 05:37:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:52.735 05:37:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:52.735 05:37:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.735 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.735 05:37:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.637 05:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:54.637 05:38:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:54.637 00:23:54.637 real 0m16.516s 00:23:54.637 user 0m20.149s 00:23:54.637 sys 0m6.573s 00:23:54.637 05:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:54.637 05:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:54.637 ************************************ 00:23:54.637 END TEST nvmf_fips 
00:23:54.637 ************************************ 00:23:54.637 05:38:01 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:54.637 05:38:01 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:54.637 05:38:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:54.637 05:38:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:54.637 05:38:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:54.637 ************************************ 00:23:54.637 START TEST nvmf_fuzz 00:23:54.637 ************************************ 00:23:54.637 05:38:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:54.896 * Looking for test storage... 00:23:54.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:54.897 05:38:01 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:54.897 05:38:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:56.811 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:56.811 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:56.811 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:56.811 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.811 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.812 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.812 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.812 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.812 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.812 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.812 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:56.812 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.812 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.812 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.812 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.812 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:56.812 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:57.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:23:57.070 00:23:57.070 --- 10.0.0.2 ping statistics --- 00:23:57.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.070 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:57.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:23:57.070 00:23:57.070 --- 10.0.0.1 ping statistics --- 00:23:57.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.070 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3285491 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3285491 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3285491 ']' 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
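Both suites run the same nvmftestinit topology before starting their target: one NIC port stays in the root namespace as the initiator interface and the other is moved into a private namespace for the target. Condensed from the trace above (the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are what this rig reported and will differ on other hardware):
  ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator sanity check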
00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:57.070 05:38:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.328 Malloc0 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:57.328 05:38:04 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:29.414 Fuzzing completed. 
Shutting down the fuzz application 00:24:29.414 00:24:29.414 Dumping successful admin opcodes: 00:24:29.414 8, 9, 10, 24, 00:24:29.414 Dumping successful io opcodes: 00:24:29.414 0, 9, 00:24:29.414 NS: 0x200003aeff00 I/O qp, Total commands completed: 422893, total successful commands: 2476, random_seed: 3781356416 00:24:29.414 NS: 0x200003aeff00 admin qp, Total commands completed: 53247, total successful commands: 429, random_seed: 1234920320 00:24:29.414 05:38:34 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:29.414 Fuzzing completed. Shutting down the fuzz application 00:24:29.414 00:24:29.414 Dumping successful admin opcodes: 00:24:29.414 24, 00:24:29.414 Dumping successful io opcodes: 00:24:29.414 00:24:29.414 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3553080239 00:24:29.414 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3553189388 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:29.414 rmmod nvme_tcp 00:24:29.414 rmmod nvme_fabrics 00:24:29.414 rmmod nvme_keyring 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 3285491 ']' 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 3285491 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3285491 ']' 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 3285491 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3285491 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
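The fuzz stage above makes two passes with nvme_fuzz: a 30-second randomized pass seeded with -S 123456, then a replay of example.json against the same transport ID; the opcode dumps and command counters above come from those two runs. A sketch of reproducing the same two invocations by hand, with the flags copied verbatim from the trace (the SPDK, TRID and FUZZ variables are added only for readability):

    # Both command lines are taken from the trace above; only the variables are added.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    FUZZ="$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz"
    "$FUZZ" -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a                                # randomized 30 s pass
    "$FUZZ" -m 0x2 -F "$TRID" -j "$SPDK/test/app/fuzz/nvme_fuzz/example.json" -a   # JSON-driven replay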
00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3285491' 00:24:29.414 killing process with pid 3285491 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 3285491 00:24:29.414 05:38:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 3285491 00:24:29.673 05:38:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:29.673 05:38:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:29.673 05:38:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:29.673 05:38:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:29.673 05:38:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:29.673 05:38:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.673 05:38:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.674 05:38:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.573 05:38:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:31.573 05:38:38 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:31.573 00:24:31.573 real 0m36.898s 00:24:31.573 user 0m47.395s 00:24:31.573 sys 0m15.655s 00:24:31.573 05:38:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:31.573 05:38:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:31.573 ************************************ 00:24:31.573 END TEST nvmf_fuzz 00:24:31.573 ************************************ 00:24:31.573 05:38:38 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:31.573 05:38:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:31.573 05:38:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:31.573 05:38:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:31.573 ************************************ 00:24:31.573 START TEST nvmf_multiconnection 00:24:31.573 ************************************ 00:24:31.573 05:38:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:31.832 * Looking for test storage... 
00:24:31.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:31.832 05:38:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.730 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.731 05:38:40 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:33.731 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:33.731 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:33.731 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:33.731 05:38:40 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:33.731 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:33.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:24:33.731 00:24:33.731 --- 10.0.0.2 ping statistics --- 00:24:33.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.731 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:33.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:24:33.731 00:24:33.731 --- 10.0.0.1 ping statistics --- 00:24:33.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.731 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=3291111 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 3291111 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 3291111 ']' 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
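The nvmf_tcp_init trace above sets up the loopback-style phy topology that both tests reuse: one ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened with iptables, and the two pings confirm reachability in both directions. Condensed from the commands traced above (a summary of the script's effect, not a standalone replacement for it):

    # Condensed from the nvmf/common.sh trace above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP connections
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1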
00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:33.731 05:38:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.990 [2024-07-14 05:38:40.848024] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:33.990 [2024-07-14 05:38:40.848095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.990 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.990 [2024-07-14 05:38:40.911774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:33.990 [2024-07-14 05:38:41.001677] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.990 [2024-07-14 05:38:41.001744] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.990 [2024-07-14 05:38:41.001768] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.990 [2024-07-14 05:38:41.001778] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.990 [2024-07-14 05:38:41.001788] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.990 [2024-07-14 05:38:41.001887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.990 [2024-07-14 05:38:41.001945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.990 [2024-07-14 05:38:41.002020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:33.990 [2024-07-14 05:38:41.002023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.248 [2024-07-14 05:38:41.159699] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.248 05:38:41 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.248 Malloc1 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.248 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.249 [2024-07-14 05:38:41.217187] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.249 Malloc2 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.249 05:38:41 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.249 Malloc3 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.249 Malloc4 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.249 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 Malloc5 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 Malloc6 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 05:38:41 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 Malloc7 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 Malloc8 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 Malloc9 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.508 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.767 Malloc10 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.767 Malloc11 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
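The multiconnection setup above repeats the same four RPCs for each of the 11 subsystems: create a 64 MB malloc bdev with a 512-byte block size, create cnode$i with serial SPDK$i, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A compact sketch of that loop, assuming rpc_cmd in these scripts resolves to scripts/rpc.py (the rpc wrapper below is that assumption made explicit):

    # Sketch of the per-subsystem loop traced above; the rpc wrapper is an assumption.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 11); do
        rpc bdev_malloc_create 64 512 -b "Malloc$i"
        rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done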
00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.767 05:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:35.332 05:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:35.332 05:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:35.332 05:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:35.332 05:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:35.332 05:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:37.233 05:38:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:37.233 05:38:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:37.233 05:38:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:24:37.233 05:38:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:37.233 05:38:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:37.233 05:38:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:37.233 05:38:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.233 05:38:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:38.165 05:38:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:38.165 05:38:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:38.165 05:38:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:38.165 05:38:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:38.165 05:38:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:40.057 05:38:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:40.057 05:38:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:40.057 05:38:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:24:40.057 05:38:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:40.057 05:38:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:40.057 
05:38:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:40.057 05:38:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.057 05:38:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:40.620 05:38:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:40.620 05:38:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:40.620 05:38:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:40.620 05:38:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:40.620 05:38:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:43.170 05:38:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:43.170 05:38:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:43.170 05:38:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:24:43.170 05:38:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:43.170 05:38:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:43.170 05:38:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:43.170 05:38:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.170 05:38:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:43.427 05:38:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:43.427 05:38:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:43.427 05:38:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:43.427 05:38:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:43.427 05:38:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:45.954 05:38:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:45.954 05:38:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:45.954 05:38:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:24:45.954 05:38:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:45.954 05:38:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:45.954 05:38:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:45.954 05:38:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:45.954 05:38:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:46.527 05:38:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:46.527 05:38:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:46.527 05:38:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:46.527 05:38:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:46.527 05:38:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:48.423 05:38:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:48.423 05:38:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:48.423 05:38:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:24:48.423 05:38:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:48.423 05:38:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:48.423 05:38:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:48.423 05:38:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.423 05:38:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:49.357 05:38:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:49.357 05:38:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:49.357 05:38:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:49.357 05:38:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:49.357 05:38:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:51.255 05:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:51.255 05:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:51.255 05:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:24:51.255 05:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:51.255 05:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:51.255 05:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:51.255 05:38:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.255 05:38:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:52.187 05:38:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:52.187 05:38:58 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:52.187 05:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:52.187 05:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:52.187 05:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:54.084 05:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:54.084 05:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:54.084 05:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:24:54.084 05:39:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:54.084 05:39:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:54.084 05:39:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:54.084 05:39:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.084 05:39:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:55.016 05:39:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:55.016 05:39:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:55.016 05:39:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:55.016 05:39:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:55.016 05:39:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:56.919 05:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:56.919 05:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:56.919 05:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:24:56.919 05:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:56.919 05:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:56.919 05:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:56.919 05:39:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:56.919 05:39:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:57.851 05:39:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:57.851 05:39:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:57.851 05:39:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:57.851 05:39:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 
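[Editorial aside] The repeated common/autotest_common.sh@1194-@1204 trace lines in this section are the waitforserial helper polling until the namespace from the preceding nvme connect shows up in lsblk with the expected serial (SPDK1..SPDK11). The following is only a rough bash sketch reconstructed from the xtrace output shown here, not the verbatim helper; the optional second argument and the exact loop control flow in common/autotest_common.sh are assumptions.

    # Reconstructed sketch of the polling loop traced above (assumed, not verbatim).
    # waitforserial SERIAL [COUNT] - wait until COUNT block devices with SERIAL appear.
    waitforserial() {
        local serial=$1
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        [[ -n "$2" ]] && nvme_device_counter=$2    # optional count, inferred from the [[ -n '' ]] check in the trace
        while sleep 2 && (( i++ <= 15 )); do       # retry for ~16 iterations, 2s apart, as in the trace
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

In this test each nvme connect to cnode1..cnode11 is followed by one such wait before the next subsystem is attached, which is why the same @1194-@1204 lines recur eleven times below.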
00:24:57.851 05:39:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:59.746 05:39:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:59.746 05:39:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:59.746 05:39:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:24:59.746 05:39:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:59.746 05:39:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:59.746 05:39:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:59.746 05:39:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.746 05:39:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:00.718 05:39:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:00.718 05:39:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:00.718 05:39:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:00.718 05:39:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:00.718 05:39:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:02.618 05:39:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:02.618 05:39:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:02.618 05:39:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:25:02.618 05:39:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:02.618 05:39:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:02.618 05:39:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:02.618 05:39:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:02.618 05:39:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:03.991 05:39:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:03.991 05:39:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:03.991 05:39:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:03.991 05:39:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:03.991 05:39:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:05.886 05:39:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:05.886 05:39:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o 
NAME,SERIAL 00:25:05.886 05:39:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:25:05.886 05:39:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:05.886 05:39:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:05.886 05:39:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:05.886 05:39:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:05.886 [global] 00:25:05.886 thread=1 00:25:05.886 invalidate=1 00:25:05.886 rw=read 00:25:05.886 time_based=1 00:25:05.886 runtime=10 00:25:05.886 ioengine=libaio 00:25:05.886 direct=1 00:25:05.886 bs=262144 00:25:05.886 iodepth=64 00:25:05.886 norandommap=1 00:25:05.886 numjobs=1 00:25:05.886 00:25:05.886 [job0] 00:25:05.886 filename=/dev/nvme0n1 00:25:05.886 [job1] 00:25:05.886 filename=/dev/nvme10n1 00:25:05.886 [job2] 00:25:05.886 filename=/dev/nvme1n1 00:25:05.886 [job3] 00:25:05.886 filename=/dev/nvme2n1 00:25:05.886 [job4] 00:25:05.886 filename=/dev/nvme3n1 00:25:05.886 [job5] 00:25:05.886 filename=/dev/nvme4n1 00:25:05.886 [job6] 00:25:05.886 filename=/dev/nvme5n1 00:25:05.887 [job7] 00:25:05.887 filename=/dev/nvme6n1 00:25:05.887 [job8] 00:25:05.887 filename=/dev/nvme7n1 00:25:05.887 [job9] 00:25:05.887 filename=/dev/nvme8n1 00:25:05.887 [job10] 00:25:05.887 filename=/dev/nvme9n1 00:25:05.887 Could not set queue depth (nvme0n1) 00:25:05.887 Could not set queue depth (nvme10n1) 00:25:05.887 Could not set queue depth (nvme1n1) 00:25:05.887 Could not set queue depth (nvme2n1) 00:25:05.887 Could not set queue depth (nvme3n1) 00:25:05.887 Could not set queue depth (nvme4n1) 00:25:05.887 Could not set queue depth (nvme5n1) 00:25:05.887 Could not set queue depth (nvme6n1) 00:25:05.887 Could not set queue depth (nvme7n1) 00:25:05.887 Could not set queue depth (nvme8n1) 00:25:05.887 Could not set queue depth (nvme9n1) 00:25:05.887 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.887 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.887 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.887 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.887 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.887 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.887 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.887 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.887 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.887 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.887 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.887 fio-3.35 00:25:05.887 Starting 11 threads 00:25:18.089 00:25:18.089 job0: 
(groupid=0, jobs=1): err= 0: pid=3296071: Sun Jul 14 05:39:23 2024 00:25:18.089 read: IOPS=467, BW=117MiB/s (123MB/s)(1177MiB/10063msec) 00:25:18.089 slat (usec): min=8, max=126284, avg=1391.33, stdev=6706.43 00:25:18.089 clat (msec): min=2, max=327, avg=135.24, stdev=72.54 00:25:18.089 lat (msec): min=2, max=362, avg=136.63, stdev=73.50 00:25:18.089 clat percentiles (msec): 00:25:18.090 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 18], 20.00th=[ 77], 00:25:18.090 | 30.00th=[ 111], 40.00th=[ 123], 50.00th=[ 136], 60.00th=[ 148], 00:25:18.090 | 70.00th=[ 169], 80.00th=[ 207], 90.00th=[ 236], 95.00th=[ 255], 00:25:18.090 | 99.00th=[ 284], 99.50th=[ 296], 99.90th=[ 305], 99.95th=[ 313], 00:25:18.090 | 99.99th=[ 330] 00:25:18.090 bw ( KiB/s): min=64512, max=279552, per=7.73%, avg=118924.65, stdev=45277.67, samples=20 00:25:18.090 iops : min= 252, max= 1092, avg=464.45, stdev=176.87, samples=20 00:25:18.090 lat (msec) : 4=0.21%, 10=4.03%, 20=7.73%, 50=4.69%, 100=8.62% 00:25:18.090 lat (msec) : 250=67.83%, 500=6.88% 00:25:18.090 cpu : usr=0.25%, sys=1.43%, ctx=1157, majf=0, minf=4097 00:25:18.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:18.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.090 issued rwts: total=4709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.090 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.090 job1: (groupid=0, jobs=1): err= 0: pid=3296102: Sun Jul 14 05:39:23 2024 00:25:18.090 read: IOPS=650, BW=163MiB/s (171MB/s)(1640MiB/10076msec) 00:25:18.090 slat (usec): min=8, max=332136, avg=743.29, stdev=7047.33 00:25:18.090 clat (usec): min=1490, max=391783, avg=97482.01, stdev=77335.33 00:25:18.090 lat (usec): min=1558, max=582613, avg=98225.30, stdev=77978.18 00:25:18.090 clat percentiles (msec): 00:25:18.090 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 11], 20.00th=[ 31], 00:25:18.090 | 30.00th=[ 43], 40.00th=[ 55], 50.00th=[ 81], 60.00th=[ 112], 00:25:18.090 | 70.00th=[ 136], 80.00th=[ 153], 90.00th=[ 207], 95.00th=[ 253], 00:25:18.090 | 99.00th=[ 338], 99.50th=[ 363], 99.90th=[ 376], 99.95th=[ 376], 00:25:18.090 | 99.99th=[ 393] 00:25:18.090 bw ( KiB/s): min=51200, max=413696, per=10.81%, avg=166273.95, stdev=84257.14, samples=20 00:25:18.090 iops : min= 200, max= 1616, avg=649.45, stdev=329.16, samples=20 00:25:18.090 lat (msec) : 2=0.03%, 4=1.10%, 10=8.55%, 20=5.73%, 50=21.62% 00:25:18.090 lat (msec) : 100=18.52%, 250=38.77%, 500=5.67% 00:25:18.090 cpu : usr=0.28%, sys=1.89%, ctx=1916, majf=0, minf=4097 00:25:18.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:18.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.090 issued rwts: total=6559,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.090 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.090 job2: (groupid=0, jobs=1): err= 0: pid=3296123: Sun Jul 14 05:39:23 2024 00:25:18.090 read: IOPS=436, BW=109MiB/s (114MB/s)(1098MiB/10069msec) 00:25:18.090 slat (usec): min=9, max=143328, avg=1502.99, stdev=7357.08 00:25:18.090 clat (usec): min=1167, max=633698, avg=145129.59, stdev=97846.50 00:25:18.090 lat (usec): min=1193, max=633718, avg=146632.57, stdev=98763.84 00:25:18.090 clat percentiles (msec): 00:25:18.090 | 1.00th=[ 3], 5.00th=[ 12], 10.00th=[ 23], 20.00th=[ 92], 00:25:18.090 | 30.00th=[ 
106], 40.00th=[ 116], 50.00th=[ 128], 60.00th=[ 140], 00:25:18.090 | 70.00th=[ 157], 80.00th=[ 192], 90.00th=[ 284], 95.00th=[ 351], 00:25:18.090 | 99.00th=[ 447], 99.50th=[ 527], 99.90th=[ 617], 99.95th=[ 617], 00:25:18.090 | 99.99th=[ 634] 00:25:18.090 bw ( KiB/s): min=55808, max=210011, per=7.20%, avg=110763.50, stdev=39325.00, samples=20 00:25:18.090 iops : min= 218, max= 820, avg=432.65, stdev=153.57, samples=20 00:25:18.090 lat (msec) : 2=0.30%, 4=1.09%, 10=2.87%, 20=4.37%, 50=6.67% 00:25:18.090 lat (msec) : 100=11.09%, 250=59.94%, 500=13.09%, 750=0.57% 00:25:18.090 cpu : usr=0.24%, sys=1.29%, ctx=1099, majf=0, minf=3721 00:25:18.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:18.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.090 issued rwts: total=4391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.090 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.090 job3: (groupid=0, jobs=1): err= 0: pid=3296124: Sun Jul 14 05:39:23 2024 00:25:18.090 read: IOPS=487, BW=122MiB/s (128MB/s)(1227MiB/10072msec) 00:25:18.090 slat (usec): min=14, max=83995, avg=1960.88, stdev=6271.00 00:25:18.090 clat (msec): min=7, max=310, avg=129.30, stdev=62.25 00:25:18.090 lat (msec): min=7, max=334, avg=131.26, stdev=63.30 00:25:18.090 clat percentiles (msec): 00:25:18.090 | 1.00th=[ 39], 5.00th=[ 61], 10.00th=[ 69], 20.00th=[ 78], 00:25:18.090 | 30.00th=[ 86], 40.00th=[ 94], 50.00th=[ 109], 60.00th=[ 129], 00:25:18.090 | 70.00th=[ 153], 80.00th=[ 199], 90.00th=[ 234], 95.00th=[ 251], 00:25:18.090 | 99.00th=[ 275], 99.50th=[ 284], 99.90th=[ 296], 99.95th=[ 300], 00:25:18.090 | 99.99th=[ 309] 00:25:18.090 bw ( KiB/s): min=59392, max=224768, per=8.06%, avg=123984.65, stdev=54143.90, samples=20 00:25:18.090 iops : min= 232, max= 878, avg=484.25, stdev=211.50, samples=20 00:25:18.090 lat (msec) : 10=0.06%, 20=0.51%, 50=1.39%, 100=43.59%, 250=49.44% 00:25:18.090 lat (msec) : 500=5.01% 00:25:18.090 cpu : usr=0.27%, sys=1.78%, ctx=817, majf=0, minf=4097 00:25:18.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:18.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.090 issued rwts: total=4907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.090 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.090 job4: (groupid=0, jobs=1): err= 0: pid=3296125: Sun Jul 14 05:39:23 2024 00:25:18.090 read: IOPS=767, BW=192MiB/s (201MB/s)(1940MiB/10115msec) 00:25:18.090 slat (usec): min=10, max=188690, avg=1072.61, stdev=4507.89 00:25:18.090 clat (usec): min=1581, max=289843, avg=82280.90, stdev=61247.63 00:25:18.090 lat (usec): min=1604, max=430782, avg=83353.51, stdev=62007.42 00:25:18.090 clat percentiles (msec): 00:25:18.090 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 21], 20.00th=[ 33], 00:25:18.090 | 30.00th=[ 39], 40.00th=[ 41], 50.00th=[ 50], 60.00th=[ 104], 00:25:18.090 | 70.00th=[ 121], 80.00th=[ 140], 90.00th=[ 163], 95.00th=[ 188], 00:25:18.090 | 99.00th=[ 253], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 284], 00:25:18.090 | 99.99th=[ 292] 00:25:18.090 bw ( KiB/s): min=92160, max=438272, per=12.81%, avg=197018.80, stdev=115968.36, samples=20 00:25:18.090 iops : min= 360, max= 1712, avg=769.60, stdev=453.00, samples=20 00:25:18.090 lat (msec) : 2=0.05%, 4=3.65%, 10=3.49%, 20=2.54%, 50=41.13% 
00:25:18.090 lat (msec) : 100=7.86%, 250=39.91%, 500=1.37% 00:25:18.090 cpu : usr=0.38%, sys=2.56%, ctx=1880, majf=0, minf=4097 00:25:18.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:18.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.090 issued rwts: total=7760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.090 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.090 job5: (groupid=0, jobs=1): err= 0: pid=3296126: Sun Jul 14 05:39:23 2024 00:25:18.090 read: IOPS=487, BW=122MiB/s (128MB/s)(1229MiB/10075msec) 00:25:18.090 slat (usec): min=10, max=133738, avg=1868.55, stdev=6687.08 00:25:18.090 clat (msec): min=35, max=329, avg=129.19, stdev=57.27 00:25:18.090 lat (msec): min=40, max=343, avg=131.06, stdev=58.24 00:25:18.090 clat percentiles (msec): 00:25:18.090 | 1.00th=[ 44], 5.00th=[ 47], 10.00th=[ 55], 20.00th=[ 83], 00:25:18.090 | 30.00th=[ 99], 40.00th=[ 113], 50.00th=[ 124], 60.00th=[ 133], 00:25:18.090 | 70.00th=[ 144], 80.00th=[ 167], 90.00th=[ 224], 95.00th=[ 249], 00:25:18.090 | 99.00th=[ 279], 99.50th=[ 292], 99.90th=[ 305], 99.95th=[ 305], 00:25:18.090 | 99.99th=[ 330] 00:25:18.090 bw ( KiB/s): min=62976, max=242203, per=8.08%, avg=124208.60, stdev=53815.01, samples=20 00:25:18.090 iops : min= 246, max= 946, avg=485.15, stdev=210.23, samples=20 00:25:18.090 lat (msec) : 50=8.18%, 100=23.01%, 250=64.28%, 500=4.54% 00:25:18.090 cpu : usr=0.29%, sys=1.83%, ctx=1017, majf=0, minf=4097 00:25:18.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:18.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.090 issued rwts: total=4916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.090 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.090 job6: (groupid=0, jobs=1): err= 0: pid=3296127: Sun Jul 14 05:39:23 2024 00:25:18.090 read: IOPS=467, BW=117MiB/s (123MB/s)(1177MiB/10065msec) 00:25:18.090 slat (usec): min=9, max=93501, avg=1339.40, stdev=5504.97 00:25:18.090 clat (usec): min=1301, max=368256, avg=135435.66, stdev=65122.63 00:25:18.090 lat (usec): min=1332, max=368295, avg=136775.06, stdev=65999.08 00:25:18.090 clat percentiles (msec): 00:25:18.090 | 1.00th=[ 5], 5.00th=[ 18], 10.00th=[ 53], 20.00th=[ 92], 00:25:18.090 | 30.00th=[ 109], 40.00th=[ 121], 50.00th=[ 130], 60.00th=[ 140], 00:25:18.090 | 70.00th=[ 153], 80.00th=[ 182], 90.00th=[ 234], 95.00th=[ 255], 00:25:18.090 | 99.00th=[ 292], 99.50th=[ 317], 99.90th=[ 363], 99.95th=[ 363], 00:25:18.090 | 99.99th=[ 368] 00:25:18.090 bw ( KiB/s): min=59904, max=193024, per=7.73%, avg=118871.65, stdev=39727.76, samples=20 00:25:18.090 iops : min= 234, max= 754, avg=464.30, stdev=155.20, samples=20 00:25:18.090 lat (msec) : 2=0.38%, 4=0.51%, 10=1.93%, 20=2.55%, 50=4.10% 00:25:18.090 lat (msec) : 100=14.66%, 250=70.00%, 500=5.86% 00:25:18.090 cpu : usr=0.26%, sys=1.48%, ctx=1231, majf=0, minf=4097 00:25:18.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:18.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.090 issued rwts: total=4706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.090 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.090 job7: 
(groupid=0, jobs=1): err= 0: pid=3296128: Sun Jul 14 05:39:23 2024 00:25:18.090 read: IOPS=628, BW=157MiB/s (165MB/s)(1582MiB/10068msec) 00:25:18.090 slat (usec): min=9, max=177380, avg=993.41, stdev=5228.61 00:25:18.091 clat (usec): min=1342, max=420890, avg=100757.08, stdev=64139.13 00:25:18.091 lat (usec): min=1369, max=420922, avg=101750.50, stdev=64885.45 00:25:18.091 clat percentiles (msec): 00:25:18.091 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 22], 20.00th=[ 44], 00:25:18.091 | 30.00th=[ 56], 40.00th=[ 77], 50.00th=[ 102], 60.00th=[ 121], 00:25:18.091 | 70.00th=[ 136], 80.00th=[ 148], 90.00th=[ 167], 95.00th=[ 218], 00:25:18.091 | 99.00th=[ 309], 99.50th=[ 326], 99.90th=[ 338], 99.95th=[ 338], 00:25:18.091 | 99.99th=[ 422] 00:25:18.091 bw ( KiB/s): min=89600, max=284672, per=10.43%, avg=160355.65, stdev=53450.45, samples=20 00:25:18.091 iops : min= 350, max= 1112, avg=626.30, stdev=208.84, samples=20 00:25:18.091 lat (msec) : 2=0.03%, 4=0.49%, 10=3.59%, 20=5.15%, 50=16.04% 00:25:18.091 lat (msec) : 100=23.97%, 250=48.07%, 500=2.65% 00:25:18.091 cpu : usr=0.31%, sys=1.70%, ctx=1761, majf=0, minf=4097 00:25:18.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:18.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.091 issued rwts: total=6328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.091 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.091 job8: (groupid=0, jobs=1): err= 0: pid=3296129: Sun Jul 14 05:39:23 2024 00:25:18.091 read: IOPS=677, BW=169MiB/s (178MB/s)(1706MiB/10073msec) 00:25:18.091 slat (usec): min=10, max=144684, avg=1211.82, stdev=5064.88 00:25:18.091 clat (usec): min=1020, max=413288, avg=93172.85, stdev=66756.61 00:25:18.091 lat (usec): min=1036, max=413379, avg=94384.68, stdev=67558.62 00:25:18.091 clat percentiles (msec): 00:25:18.091 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 28], 20.00th=[ 35], 00:25:18.091 | 30.00th=[ 42], 40.00th=[ 65], 50.00th=[ 88], 60.00th=[ 108], 00:25:18.091 | 70.00th=[ 123], 80.00th=[ 138], 90.00th=[ 159], 95.00th=[ 215], 00:25:18.091 | 99.00th=[ 355], 99.50th=[ 384], 99.90th=[ 393], 99.95th=[ 401], 00:25:18.091 | 99.99th=[ 414] 00:25:18.091 bw ( KiB/s): min=33792, max=387072, per=11.26%, avg=173088.95, stdev=107712.84, samples=20 00:25:18.091 iops : min= 132, max= 1512, avg=676.10, stdev=420.78, samples=20 00:25:18.091 lat (msec) : 2=0.18%, 4=1.22%, 10=3.05%, 20=2.58%, 50=27.66% 00:25:18.091 lat (msec) : 100=20.41%, 250=41.32%, 500=3.59% 00:25:18.091 cpu : usr=0.29%, sys=2.13%, ctx=1574, majf=0, minf=4097 00:25:18.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:18.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.091 issued rwts: total=6825,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.091 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.091 job9: (groupid=0, jobs=1): err= 0: pid=3296130: Sun Jul 14 05:39:23 2024 00:25:18.091 read: IOPS=529, BW=132MiB/s (139MB/s)(1333MiB/10065msec) 00:25:18.091 slat (usec): min=10, max=195063, avg=1107.14, stdev=6415.99 00:25:18.091 clat (usec): min=1372, max=443189, avg=119603.64, stdev=79701.16 00:25:18.091 lat (usec): min=1400, max=443203, avg=120710.79, stdev=80211.00 00:25:18.091 clat percentiles (msec): 00:25:18.091 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 20], 
20.00th=[ 38], 00:25:18.091 | 30.00th=[ 66], 40.00th=[ 106], 50.00th=[ 121], 60.00th=[ 134], 00:25:18.091 | 70.00th=[ 148], 80.00th=[ 174], 90.00th=[ 234], 95.00th=[ 259], 00:25:18.091 | 99.00th=[ 355], 99.50th=[ 426], 99.90th=[ 443], 99.95th=[ 443], 00:25:18.091 | 99.99th=[ 443] 00:25:18.091 bw ( KiB/s): min=38912, max=284672, per=8.77%, avg=134873.25, stdev=59838.11, samples=20 00:25:18.091 iops : min= 152, max= 1112, avg=526.80, stdev=233.75, samples=20 00:25:18.091 lat (msec) : 2=0.62%, 4=0.88%, 10=3.19%, 20=5.68%, 50=14.14% 00:25:18.091 lat (msec) : 100=13.34%, 250=55.94%, 500=6.21% 00:25:18.091 cpu : usr=0.23%, sys=1.64%, ctx=1507, majf=0, minf=4097 00:25:18.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:18.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.091 issued rwts: total=5331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.091 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.091 job10: (groupid=0, jobs=1): err= 0: pid=3296131: Sun Jul 14 05:39:23 2024 00:25:18.091 read: IOPS=429, BW=107MiB/s (113MB/s)(1082MiB/10075msec) 00:25:18.091 slat (usec): min=9, max=436675, avg=1680.70, stdev=10181.66 00:25:18.091 clat (msec): min=2, max=724, avg=147.18, stdev=84.21 00:25:18.091 lat (msec): min=2, max=724, avg=148.86, stdev=85.41 00:25:18.091 clat percentiles (msec): 00:25:18.091 | 1.00th=[ 11], 5.00th=[ 37], 10.00th=[ 52], 20.00th=[ 101], 00:25:18.091 | 30.00th=[ 118], 40.00th=[ 129], 50.00th=[ 138], 60.00th=[ 146], 00:25:18.091 | 70.00th=[ 157], 80.00th=[ 178], 90.00th=[ 230], 95.00th=[ 288], 00:25:18.091 | 99.00th=[ 542], 99.50th=[ 617], 99.90th=[ 659], 99.95th=[ 684], 00:25:18.091 | 99.99th=[ 726] 00:25:18.091 bw ( KiB/s): min=15360, max=206236, per=7.10%, avg=109204.60, stdev=37946.97, samples=20 00:25:18.091 iops : min= 60, max= 805, avg=426.55, stdev=148.15, samples=20 00:25:18.091 lat (msec) : 4=0.32%, 10=0.67%, 20=0.92%, 50=7.93%, 100=10.30% 00:25:18.091 lat (msec) : 250=72.64%, 500=5.75%, 750=1.46% 00:25:18.091 cpu : usr=0.17%, sys=1.40%, ctx=1047, majf=0, minf=4097 00:25:18.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:18.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.091 issued rwts: total=4328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.091 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.091 00:25:18.091 Run status group 0 (all jobs): 00:25:18.091 READ: bw=1502MiB/s (1575MB/s), 107MiB/s-192MiB/s (113MB/s-201MB/s), io=14.8GiB (15.9GB), run=10063-10115msec 00:25:18.091 00:25:18.091 Disk stats (read/write): 00:25:18.091 nvme0n1: ios=9228/0, merge=0/0, ticks=1238904/0, in_queue=1238904, util=97.18% 00:25:18.091 nvme10n1: ios=12863/0, merge=0/0, ticks=1238088/0, in_queue=1238088, util=97.43% 00:25:18.091 nvme1n1: ios=8572/0, merge=0/0, ticks=1237483/0, in_queue=1237483, util=97.73% 00:25:18.091 nvme2n1: ios=9598/0, merge=0/0, ticks=1233688/0, in_queue=1233688, util=97.88% 00:25:18.091 nvme3n1: ios=15295/0, merge=0/0, ticks=1235191/0, in_queue=1235191, util=97.96% 00:25:18.091 nvme4n1: ios=9620/0, merge=0/0, ticks=1229894/0, in_queue=1229894, util=98.28% 00:25:18.091 nvme5n1: ios=9195/0, merge=0/0, ticks=1236846/0, in_queue=1236846, util=98.41% 00:25:18.091 nvme6n1: ios=12397/0, merge=0/0, ticks=1242041/0, in_queue=1242041, 
util=98.53% 00:25:18.091 nvme7n1: ios=13444/0, merge=0/0, ticks=1233981/0, in_queue=1233981, util=98.94% 00:25:18.091 nvme8n1: ios=10429/0, merge=0/0, ticks=1244926/0, in_queue=1244926, util=99.09% 00:25:18.091 nvme9n1: ios=8428/0, merge=0/0, ticks=1237161/0, in_queue=1237161, util=99.22% 00:25:18.091 05:39:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:18.091 [global] 00:25:18.091 thread=1 00:25:18.091 invalidate=1 00:25:18.091 rw=randwrite 00:25:18.091 time_based=1 00:25:18.091 runtime=10 00:25:18.091 ioengine=libaio 00:25:18.091 direct=1 00:25:18.091 bs=262144 00:25:18.091 iodepth=64 00:25:18.091 norandommap=1 00:25:18.091 numjobs=1 00:25:18.091 00:25:18.091 [job0] 00:25:18.091 filename=/dev/nvme0n1 00:25:18.091 [job1] 00:25:18.091 filename=/dev/nvme10n1 00:25:18.091 [job2] 00:25:18.091 filename=/dev/nvme1n1 00:25:18.091 [job3] 00:25:18.091 filename=/dev/nvme2n1 00:25:18.091 [job4] 00:25:18.091 filename=/dev/nvme3n1 00:25:18.091 [job5] 00:25:18.091 filename=/dev/nvme4n1 00:25:18.091 [job6] 00:25:18.091 filename=/dev/nvme5n1 00:25:18.091 [job7] 00:25:18.091 filename=/dev/nvme6n1 00:25:18.091 [job8] 00:25:18.091 filename=/dev/nvme7n1 00:25:18.091 [job9] 00:25:18.091 filename=/dev/nvme8n1 00:25:18.091 [job10] 00:25:18.091 filename=/dev/nvme9n1 00:25:18.091 Could not set queue depth (nvme0n1) 00:25:18.091 Could not set queue depth (nvme10n1) 00:25:18.091 Could not set queue depth (nvme1n1) 00:25:18.091 Could not set queue depth (nvme2n1) 00:25:18.091 Could not set queue depth (nvme3n1) 00:25:18.091 Could not set queue depth (nvme4n1) 00:25:18.091 Could not set queue depth (nvme5n1) 00:25:18.091 Could not set queue depth (nvme6n1) 00:25:18.091 Could not set queue depth (nvme7n1) 00:25:18.091 Could not set queue depth (nvme8n1) 00:25:18.091 Could not set queue depth (nvme9n1) 00:25:18.091 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.091 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.091 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.091 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.091 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.091 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.091 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.091 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.091 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.091 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.091 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.091 fio-3.35 00:25:18.091 Starting 11 threads 00:25:28.063 00:25:28.063 job0: (groupid=0, jobs=1): err= 0: pid=3297153: Sun Jul 14 05:39:34 2024 00:25:28.063 write: IOPS=364, BW=91.0MiB/s 
(95.4MB/s)(927MiB/10189msec); 0 zone resets 00:25:28.063 slat (usec): min=24, max=84753, avg=2114.80, stdev=5435.97 00:25:28.063 clat (msec): min=2, max=398, avg=173.56, stdev=87.88 00:25:28.063 lat (msec): min=3, max=398, avg=175.67, stdev=89.02 00:25:28.063 clat percentiles (msec): 00:25:28.063 | 1.00th=[ 11], 5.00th=[ 32], 10.00th=[ 49], 20.00th=[ 74], 00:25:28.063 | 30.00th=[ 106], 40.00th=[ 161], 50.00th=[ 199], 60.00th=[ 215], 00:25:28.063 | 70.00th=[ 232], 80.00th=[ 249], 90.00th=[ 279], 95.00th=[ 300], 00:25:28.063 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 384], 99.95th=[ 401], 00:25:28.063 | 99.99th=[ 401] 00:25:28.063 bw ( KiB/s): min=54784, max=202752, per=7.47%, avg=93288.35, stdev=43895.15, samples=20 00:25:28.063 iops : min= 214, max= 792, avg=364.35, stdev=171.49, samples=20 00:25:28.063 lat (msec) : 4=0.11%, 10=0.70%, 20=2.21%, 50=7.55%, 100=18.58% 00:25:28.063 lat (msec) : 250=51.55%, 500=19.30% 00:25:28.063 cpu : usr=0.93%, sys=1.22%, ctx=1827, majf=0, minf=1 00:25:28.063 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:28.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.063 issued rwts: total=0,3709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.063 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.063 job1: (groupid=0, jobs=1): err= 0: pid=3297165: Sun Jul 14 05:39:34 2024 00:25:28.063 write: IOPS=466, BW=117MiB/s (122MB/s)(1195MiB/10245msec); 0 zone resets 00:25:28.063 slat (usec): min=20, max=116990, avg=1608.50, stdev=4488.25 00:25:28.063 clat (msec): min=3, max=541, avg=135.41, stdev=78.09 00:25:28.063 lat (msec): min=3, max=541, avg=137.02, stdev=79.07 00:25:28.063 clat percentiles (msec): 00:25:28.063 | 1.00th=[ 13], 5.00th=[ 28], 10.00th=[ 41], 20.00th=[ 75], 00:25:28.063 | 30.00th=[ 92], 40.00th=[ 103], 50.00th=[ 114], 60.00th=[ 138], 00:25:28.063 | 70.00th=[ 180], 80.00th=[ 205], 90.00th=[ 232], 95.00th=[ 262], 00:25:28.063 | 99.00th=[ 363], 99.50th=[ 456], 99.90th=[ 531], 99.95th=[ 531], 00:25:28.063 | 99.99th=[ 542] 00:25:28.063 bw ( KiB/s): min=67584, max=192000, per=9.67%, avg=120747.80, stdev=41482.89, samples=20 00:25:28.063 iops : min= 264, max= 750, avg=471.60, stdev=162.04, samples=20 00:25:28.064 lat (msec) : 4=0.08%, 10=0.52%, 20=2.11%, 50=10.35%, 100=24.95% 00:25:28.064 lat (msec) : 250=55.55%, 500=6.13%, 750=0.29% 00:25:28.064 cpu : usr=1.33%, sys=1.61%, ctx=2528, majf=0, minf=1 00:25:28.064 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:28.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.064 issued rwts: total=0,4781,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.064 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.064 job2: (groupid=0, jobs=1): err= 0: pid=3297166: Sun Jul 14 05:39:34 2024 00:25:28.064 write: IOPS=518, BW=130MiB/s (136MB/s)(1316MiB/10161msec); 0 zone resets 00:25:28.064 slat (usec): min=14, max=185007, avg=1294.58, stdev=5470.44 00:25:28.064 clat (usec): min=1910, max=593137, avg=122179.43, stdev=86497.40 00:25:28.064 lat (usec): min=1968, max=595794, avg=123474.01, stdev=87310.67 00:25:28.064 clat percentiles (msec): 00:25:28.064 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 30], 20.00th=[ 62], 00:25:28.064 | 30.00th=[ 83], 40.00th=[ 95], 50.00th=[ 111], 60.00th=[ 127], 00:25:28.064 | 70.00th=[ 142], 
80.00th=[ 159], 90.00th=[ 194], 95.00th=[ 288], 00:25:28.064 | 99.00th=[ 447], 99.50th=[ 575], 99.90th=[ 592], 99.95th=[ 592], 00:25:28.064 | 99.99th=[ 592] 00:25:28.064 bw ( KiB/s): min=28672, max=198656, per=10.66%, avg=133120.10, stdev=39777.21, samples=20 00:25:28.064 iops : min= 112, max= 776, avg=519.95, stdev=155.37, samples=20 00:25:28.064 lat (msec) : 2=0.02%, 4=0.17%, 10=1.69%, 20=4.54%, 50=9.65% 00:25:28.064 lat (msec) : 100=27.51%, 250=50.06%, 500=5.57%, 750=0.80% 00:25:28.064 cpu : usr=1.43%, sys=1.54%, ctx=3077, majf=0, minf=1 00:25:28.064 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:28.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.064 issued rwts: total=0,5264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.064 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.064 job3: (groupid=0, jobs=1): err= 0: pid=3297167: Sun Jul 14 05:39:34 2024 00:25:28.064 write: IOPS=479, BW=120MiB/s (126MB/s)(1227MiB/10245msec); 0 zone resets 00:25:28.064 slat (usec): min=18, max=158638, avg=1457.01, stdev=4695.72 00:25:28.064 clat (msec): min=2, max=480, avg=132.05, stdev=82.65 00:25:28.064 lat (msec): min=2, max=480, avg=133.51, stdev=83.61 00:25:28.064 clat percentiles (msec): 00:25:28.064 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 37], 20.00th=[ 66], 00:25:28.064 | 30.00th=[ 83], 40.00th=[ 109], 50.00th=[ 122], 60.00th=[ 133], 00:25:28.064 | 70.00th=[ 153], 80.00th=[ 192], 90.00th=[ 259], 95.00th=[ 292], 00:25:28.064 | 99.00th=[ 388], 99.50th=[ 447], 99.90th=[ 477], 99.95th=[ 481], 00:25:28.064 | 99.99th=[ 481] 00:25:28.064 bw ( KiB/s): min=43008, max=241664, per=9.93%, avg=123993.20, stdev=51488.84, samples=20 00:25:28.064 iops : min= 168, max= 944, avg=484.25, stdev=201.10, samples=20 00:25:28.064 lat (msec) : 4=0.08%, 10=1.57%, 20=2.26%, 50=11.47%, 100=21.56% 00:25:28.064 lat (msec) : 250=52.06%, 500=11.00% 00:25:28.064 cpu : usr=1.52%, sys=1.32%, ctx=2805, majf=0, minf=1 00:25:28.064 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:28.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.064 issued rwts: total=0,4908,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.064 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.064 job4: (groupid=0, jobs=1): err= 0: pid=3297168: Sun Jul 14 05:39:34 2024 00:25:28.064 write: IOPS=319, BW=79.9MiB/s (83.7MB/s)(813MiB/10183msec); 0 zone resets 00:25:28.064 slat (usec): min=26, max=85486, avg=2828.97, stdev=6097.23 00:25:28.064 clat (msec): min=3, max=388, avg=197.41, stdev=74.33 00:25:28.064 lat (msec): min=3, max=402, avg=200.24, stdev=75.32 00:25:28.064 clat percentiles (msec): 00:25:28.064 | 1.00th=[ 18], 5.00th=[ 44], 10.00th=[ 86], 20.00th=[ 129], 00:25:28.064 | 30.00th=[ 171], 40.00th=[ 203], 50.00th=[ 215], 60.00th=[ 226], 00:25:28.064 | 70.00th=[ 234], 80.00th=[ 253], 90.00th=[ 279], 95.00th=[ 313], 00:25:28.064 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 388], 99.95th=[ 388], 00:25:28.064 | 99.99th=[ 388] 00:25:28.064 bw ( KiB/s): min=51200, max=139776, per=6.54%, avg=81649.25, stdev=23443.79, samples=20 00:25:28.064 iops : min= 200, max= 546, avg=318.90, stdev=91.59, samples=20 00:25:28.064 lat (msec) : 4=0.03%, 10=0.18%, 20=1.11%, 50=4.21%, 100=7.62% 00:25:28.064 lat (msec) : 250=65.85%, 500=21.00% 00:25:28.064 cpu : 
usr=0.96%, sys=0.99%, ctx=1190, majf=0, minf=1 00:25:28.064 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:28.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.064 issued rwts: total=0,3253,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.064 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.064 job5: (groupid=0, jobs=1): err= 0: pid=3297169: Sun Jul 14 05:39:34 2024 00:25:28.064 write: IOPS=390, BW=97.7MiB/s (102MB/s)(995MiB/10185msec); 0 zone resets 00:25:28.064 slat (usec): min=24, max=206766, avg=2309.10, stdev=6139.74 00:25:28.064 clat (msec): min=34, max=802, avg=161.02, stdev=81.87 00:25:28.064 lat (msec): min=34, max=802, avg=163.33, stdev=82.69 00:25:28.064 clat percentiles (msec): 00:25:28.064 | 1.00th=[ 65], 5.00th=[ 78], 10.00th=[ 88], 20.00th=[ 110], 00:25:28.064 | 30.00th=[ 121], 40.00th=[ 134], 50.00th=[ 146], 60.00th=[ 159], 00:25:28.064 | 70.00th=[ 182], 80.00th=[ 207], 90.00th=[ 222], 95.00th=[ 251], 00:25:28.064 | 99.00th=[ 535], 99.50th=[ 760], 99.90th=[ 793], 99.95th=[ 802], 00:25:28.064 | 99.99th=[ 802] 00:25:28.064 bw ( KiB/s): min=10752, max=176640, per=8.02%, avg=100220.65, stdev=37470.20, samples=20 00:25:28.064 iops : min= 42, max= 690, avg=391.40, stdev=146.35, samples=20 00:25:28.064 lat (msec) : 50=0.03%, 100=14.63%, 250=80.37%, 500=3.79%, 750=0.63% 00:25:28.064 lat (msec) : 1000=0.55% 00:25:28.064 cpu : usr=1.23%, sys=1.08%, ctx=1312, majf=0, minf=1 00:25:28.064 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:28.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.064 issued rwts: total=0,3979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.064 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.064 job6: (groupid=0, jobs=1): err= 0: pid=3297170: Sun Jul 14 05:39:34 2024 00:25:28.064 write: IOPS=422, BW=106MiB/s (111MB/s)(1064MiB/10084msec); 0 zone resets 00:25:28.064 slat (usec): min=19, max=163527, avg=1869.68, stdev=5557.45 00:25:28.064 clat (msec): min=2, max=326, avg=149.16, stdev=76.28 00:25:28.064 lat (msec): min=2, max=327, avg=151.03, stdev=77.34 00:25:28.064 clat percentiles (msec): 00:25:28.064 | 1.00th=[ 10], 5.00th=[ 42], 10.00th=[ 65], 20.00th=[ 88], 00:25:28.064 | 30.00th=[ 100], 40.00th=[ 110], 50.00th=[ 123], 60.00th=[ 157], 00:25:28.064 | 70.00th=[ 197], 80.00th=[ 232], 90.00th=[ 264], 95.00th=[ 284], 00:25:28.064 | 99.00th=[ 305], 99.50th=[ 309], 99.90th=[ 321], 99.95th=[ 326], 00:25:28.064 | 99.99th=[ 326] 00:25:28.064 bw ( KiB/s): min=58368, max=200704, per=8.59%, avg=107316.45, stdev=44750.75, samples=20 00:25:28.064 iops : min= 228, max= 784, avg=419.05, stdev=174.77, samples=20 00:25:28.064 lat (msec) : 4=0.07%, 10=0.96%, 20=1.34%, 50=4.16%, 100=23.94% 00:25:28.064 lat (msec) : 250=54.65%, 500=14.87% 00:25:28.064 cpu : usr=1.12%, sys=1.36%, ctx=1963, majf=0, minf=1 00:25:28.064 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:28.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.064 issued rwts: total=0,4256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.064 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.064 job7: (groupid=0, jobs=1): err= 0: 
pid=3297171: Sun Jul 14 05:39:34 2024 00:25:28.064 write: IOPS=365, BW=91.3MiB/s (95.7MB/s)(929MiB/10183msec); 0 zone resets 00:25:28.064 slat (usec): min=23, max=143857, avg=2275.46, stdev=5912.35 00:25:28.064 clat (msec): min=4, max=402, avg=172.96, stdev=74.34 00:25:28.064 lat (msec): min=6, max=402, avg=175.23, stdev=75.45 00:25:28.064 clat percentiles (msec): 00:25:28.064 | 1.00th=[ 26], 5.00th=[ 66], 10.00th=[ 83], 20.00th=[ 115], 00:25:28.064 | 30.00th=[ 126], 40.00th=[ 146], 50.00th=[ 161], 60.00th=[ 182], 00:25:28.064 | 70.00th=[ 211], 80.00th=[ 247], 90.00th=[ 279], 95.00th=[ 309], 00:25:28.064 | 99.00th=[ 347], 99.50th=[ 363], 99.90th=[ 388], 99.95th=[ 401], 00:25:28.064 | 99.99th=[ 401] 00:25:28.064 bw ( KiB/s): min=49152, max=147968, per=7.49%, avg=93511.45, stdev=29916.36, samples=20 00:25:28.064 iops : min= 192, max= 578, avg=365.20, stdev=116.84, samples=20 00:25:28.064 lat (msec) : 10=0.19%, 20=0.38%, 50=2.61%, 100=12.35%, 250=66.32% 00:25:28.064 lat (msec) : 500=18.16% 00:25:28.064 cpu : usr=1.14%, sys=1.12%, ctx=1662, majf=0, minf=1 00:25:28.064 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:28.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.064 issued rwts: total=0,3717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.064 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.064 job8: (groupid=0, jobs=1): err= 0: pid=3297172: Sun Jul 14 05:39:34 2024 00:25:28.064 write: IOPS=420, BW=105MiB/s (110MB/s)(1058MiB/10067msec); 0 zone resets 00:25:28.064 slat (usec): min=24, max=321603, avg=1915.15, stdev=10168.53 00:25:28.064 clat (msec): min=3, max=894, avg=150.25, stdev=96.77 00:25:28.064 lat (msec): min=4, max=908, avg=152.17, stdev=97.89 00:25:28.064 clat percentiles (msec): 00:25:28.064 | 1.00th=[ 15], 5.00th=[ 41], 10.00th=[ 60], 20.00th=[ 81], 00:25:28.064 | 30.00th=[ 93], 40.00th=[ 118], 50.00th=[ 130], 60.00th=[ 148], 00:25:28.064 | 70.00th=[ 165], 80.00th=[ 226], 90.00th=[ 268], 95.00th=[ 292], 00:25:28.064 | 99.00th=[ 625], 99.50th=[ 693], 99.90th=[ 743], 99.95th=[ 894], 00:25:28.064 | 99.99th=[ 894] 00:25:28.064 bw ( KiB/s): min=33280, max=211968, per=8.54%, avg=106663.45, stdev=46165.04, samples=20 00:25:28.064 iops : min= 130, max= 828, avg=416.60, stdev=180.30, samples=20 00:25:28.064 lat (msec) : 4=0.05%, 10=0.54%, 20=0.97%, 50=5.53%, 100=26.71% 00:25:28.064 lat (msec) : 250=52.33%, 500=12.60%, 750=1.18%, 1000=0.09% 00:25:28.064 cpu : usr=1.16%, sys=1.47%, ctx=1910, majf=0, minf=1 00:25:28.064 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:28.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.064 issued rwts: total=0,4231,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.064 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.064 job9: (groupid=0, jobs=1): err= 0: pid=3297173: Sun Jul 14 05:39:34 2024 00:25:28.064 write: IOPS=618, BW=155MiB/s (162MB/s)(1558MiB/10084msec); 0 zone resets 00:25:28.064 slat (usec): min=20, max=74397, avg=1114.44, stdev=2973.38 00:25:28.064 clat (usec): min=1470, max=469936, avg=102384.81, stdev=59201.68 00:25:28.065 lat (usec): min=1527, max=485819, avg=103499.25, stdev=59645.28 00:25:28.065 clat percentiles (msec): 00:25:28.065 | 1.00th=[ 8], 5.00th=[ 25], 10.00th=[ 45], 20.00th=[ 71], 00:25:28.065 | 30.00th=[ 
77], 40.00th=[ 83], 50.00th=[ 88], 60.00th=[ 102], 00:25:28.065 | 70.00th=[ 114], 80.00th=[ 128], 90.00th=[ 161], 95.00th=[ 239], 00:25:28.065 | 99.00th=[ 330], 99.50th=[ 342], 99.90th=[ 393], 99.95th=[ 460], 00:25:28.065 | 99.99th=[ 472] 00:25:28.065 bw ( KiB/s): min=93696, max=229888, per=12.64%, avg=157903.70, stdev=39478.14, samples=20 00:25:28.065 iops : min= 366, max= 898, avg=616.75, stdev=154.20, samples=20 00:25:28.065 lat (msec) : 2=0.42%, 4=0.26%, 10=0.55%, 20=2.70%, 50=8.29% 00:25:28.065 lat (msec) : 100=47.39%, 250=36.07%, 500=4.33% 00:25:28.065 cpu : usr=1.70%, sys=2.05%, ctx=3173, majf=0, minf=1 00:25:28.065 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:28.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.065 issued rwts: total=0,6233,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.065 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.065 job10: (groupid=0, jobs=1): err= 0: pid=3297174: Sun Jul 14 05:39:34 2024 00:25:28.065 write: IOPS=561, BW=140MiB/s (147MB/s)(1415MiB/10084msec); 0 zone resets 00:25:28.065 slat (usec): min=23, max=109675, avg=1418.47, stdev=4181.21 00:25:28.065 clat (msec): min=2, max=397, avg=112.34, stdev=73.75 00:25:28.065 lat (msec): min=3, max=397, avg=113.76, stdev=74.68 00:25:28.065 clat percentiles (msec): 00:25:28.065 | 1.00th=[ 10], 5.00th=[ 24], 10.00th=[ 41], 20.00th=[ 65], 00:25:28.065 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 91], 60.00th=[ 105], 00:25:28.065 | 70.00th=[ 124], 80.00th=[ 148], 90.00th=[ 213], 95.00th=[ 288], 00:25:28.065 | 99.00th=[ 355], 99.50th=[ 368], 99.90th=[ 376], 99.95th=[ 393], 00:25:28.065 | 99.99th=[ 397] 00:25:28.065 bw ( KiB/s): min=51200, max=223744, per=11.46%, avg=143178.50, stdev=51635.78, samples=20 00:25:28.065 iops : min= 200, max= 874, avg=559.25, stdev=201.66, samples=20 00:25:28.065 lat (msec) : 4=0.07%, 10=1.10%, 20=2.58%, 50=9.65%, 100=44.01% 00:25:28.065 lat (msec) : 250=35.12%, 500=7.48% 00:25:28.065 cpu : usr=1.61%, sys=1.79%, ctx=2629, majf=0, minf=1 00:25:28.065 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:28.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.065 issued rwts: total=0,5658,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.065 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.065 00:25:28.065 Run status group 0 (all jobs): 00:25:28.065 WRITE: bw=1220MiB/s (1279MB/s), 79.9MiB/s-155MiB/s (83.7MB/s-162MB/s), io=12.2GiB (13.1GB), run=10067-10245msec 00:25:28.065 00:25:28.065 Disk stats (read/write): 00:25:28.065 nvme0n1: ios=48/7374, merge=0/0, ticks=2435/1235857, in_queue=1238292, util=99.90% 00:25:28.065 nvme10n1: ios=43/9490, merge=0/0, ticks=1210/1229526, in_queue=1230736, util=100.00% 00:25:28.065 nvme1n1: ios=0/10514, merge=0/0, ticks=0/1238379, in_queue=1238379, util=97.30% 00:25:28.065 nvme2n1: ios=5/9748, merge=0/0, ticks=5/1233165, in_queue=1233170, util=97.56% 00:25:28.065 nvme3n1: ios=46/6462, merge=0/0, ticks=1892/1226701, in_queue=1228593, util=100.00% 00:25:28.065 nvme4n1: ios=41/7920, merge=0/0, ticks=2252/1229027, in_queue=1231279, util=100.00% 00:25:28.065 nvme5n1: ios=46/8237, merge=0/0, ticks=2305/1181089, in_queue=1183394, util=100.00% 00:25:28.065 nvme6n1: ios=0/7398, merge=0/0, ticks=0/1235576, in_queue=1235576, util=98.29% 00:25:28.065 
nvme7n1: ios=51/8154, merge=0/0, ticks=7844/1088766, in_queue=1096610, util=100.00% 00:25:28.065 nvme8n1: ios=0/12219, merge=0/0, ticks=0/1213150, in_queue=1213150, util=98.89% 00:25:28.065 nvme9n1: ios=43/11066, merge=0/0, ticks=1437/1204620, in_queue=1206057, util=100.00% 00:25:28.065 05:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:28.065 05:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:28.065 05:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.065 05:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:28.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:28.065 05:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:28.065 05:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:28.065 05:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:28.065 05:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:28.065 05:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:28.065 05:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:28.065 05:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:28.065 05:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:28.065 05:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.065 05:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.065 05:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.065 05:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.065 05:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:28.065 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:28.065 05:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:28.065 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:28.065 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:28.065 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:28.065 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:28.065 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:25:28.065 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:28.065 05:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:28.065 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.065 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.065 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:25:28.065 05:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.065 05:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:28.323 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:28.323 05:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:28.323 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:28.323 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:28.323 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:28.323 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:28.323 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:28.323 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:28.323 05:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:28.323 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.323 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.323 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.323 05:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.323 05:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:28.581 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:28.581 05:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:28.581 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:28.581 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:28.581 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:28.581 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:28.581 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:28.581 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:28.581 05:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:28.581 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.581 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.839 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.839 05:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.839 05:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:29.097 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:29.097 05:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:29.097 05:39:35 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:29.097 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:29.097 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:29.097 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:29.097 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:29.097 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:29.097 05:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:29.097 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.097 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.097 05:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.097 05:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.097 05:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:29.355 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:29.355 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:29.355 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:29.355 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:29.355 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:29.355 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:29.355 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:29.355 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:29.355 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:29.355 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.355 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.355 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.355 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.355 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:29.355 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:29.355 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:29.355 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:29.614 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.614 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:29.872 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:29.872 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.872 05:39:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:30.131 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@47 -- # nvmftestfini 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:30.131 rmmod nvme_tcp 00:25:30.131 rmmod nvme_fabrics 00:25:30.131 rmmod nvme_keyring 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 3291111 ']' 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 3291111 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 3291111 ']' 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 3291111 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3291111 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3291111' 00:25:30.131 killing process with pid 3291111 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 3291111 00:25:30.131 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 3291111 00:25:30.729 05:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:30.729 05:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:30.729 05:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:30.729 05:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:30.729 05:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:30.729 05:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.729 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.729 05:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.627 05:39:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:32.627 00:25:32.627 real 1m1.055s 00:25:32.627 user 3m22.272s 00:25:32.627 sys 0m23.777s 00:25:32.627 05:39:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:32.627 05:39:39 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:32.627 ************************************ 00:25:32.627 END TEST nvmf_multiconnection 00:25:32.627 ************************************ 00:25:32.628 05:39:39 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:32.926 05:39:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:32.926 05:39:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:32.926 05:39:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:32.926 ************************************ 00:25:32.926 START TEST nvmf_initiator_timeout 00:25:32.926 ************************************ 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:32.926 * Looking for test storage... 00:25:32.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
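Note: the per-subsystem teardown traced above (target/multiconnection.sh @37-@40) follows the pattern sketched below. This is a simplified reconstruction from the trace, not the verbatim SPDK script; waitforserial_disconnect (common/autotest_common.sh @1215-@1227) simply re-runs lsblk -l -o NAME,SERIAL until the given serial no longer appears.

  # Reconstruction of the traced teardown loop (simplified sketch)
  for i in $(seq 1 $NVMF_SUBSYS); do
      # drop the initiator-side connection for this subsystem
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      # wait until serial SPDK${i} disappears from lsblk on the initiator
      waitforserial_disconnect "SPDK${i}"
      # remove the subsystem on the target through the SPDK RPC interface
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done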
00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 
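Note: the nvmftestinit trace that follows builds a TCP loopback topology from the two detected e810 ports by moving the target-side port into a network namespace. The sketch below is a manual equivalent reconstructed from the traced commands (nvmf/common.sh @244-@267), using the interface names and addresses seen in this run; it is not the library implementation itself.

  # Sketch of the netns-based TCP test topology set up by nvmftestinit
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                        # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the netns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # sanity check: initiator -> target

With this in place the target application is started inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as traced below), so the initiator in the root namespace reaches the TCP listener at 10.0.0.2:4420 while the target can ping back to 10.0.0.1.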
00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:32.926 05:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:34.828 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:34.829 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:34.829 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.829 
05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:34.829 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:34.829 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.829 05:39:41 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:34.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:25:34.829 00:25:34.829 --- 10.0.0.2 ping statistics --- 00:25:34.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.829 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:34.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:25:34.829 00:25:34.829 --- 10.0.0.1 ping statistics --- 00:25:34.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.829 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:34.829 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:35.088 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:35.088 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:35.088 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:35.088 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.088 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=3300508 00:25:35.088 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:35.088 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 3300508 00:25:35.088 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 3300508 ']' 00:25:35.088 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.088 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:35.088 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.088 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:35.088 05:39:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.088 [2024-07-14 05:39:41.989283] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:35.088 [2024-07-14 05:39:41.989363] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.088 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.088 [2024-07-14 05:39:42.054673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:35.088 [2024-07-14 05:39:42.144032] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:35.088 [2024-07-14 05:39:42.144092] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:35.088 [2024-07-14 05:39:42.144113] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:35.088 [2024-07-14 05:39:42.144125] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:35.088 [2024-07-14 05:39:42.144135] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:35.088 [2024-07-14 05:39:42.144200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.088 [2024-07-14 05:39:42.144258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.088 [2024-07-14 05:39:42.144325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:35.088 [2024-07-14 05:39:42.144328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.347 Malloc0 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.347 Delay0 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.347 [2024-07-14 05:39:42.328226] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.347 [2024-07-14 05:39:42.356498] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.347 05:39:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:36.283 05:39:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:36.283 05:39:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:25:36.283 05:39:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:36.283 05:39:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:36.283 05:39:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:25:38.180 05:39:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:38.180 05:39:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:38.180 05:39:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:25:38.180 05:39:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:38.180 05:39:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:38.180 05:39:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:25:38.180 05:39:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3300933 00:25:38.180 05:39:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:38.180 05:39:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:38.180 [global] 00:25:38.180 thread=1 00:25:38.180 invalidate=1 00:25:38.180 rw=write 00:25:38.180 time_based=1 00:25:38.180 runtime=60 00:25:38.180 
ioengine=libaio 00:25:38.180 direct=1 00:25:38.180 bs=4096 00:25:38.180 iodepth=1 00:25:38.180 norandommap=0 00:25:38.180 numjobs=1 00:25:38.180 00:25:38.180 verify_dump=1 00:25:38.180 verify_backlog=512 00:25:38.180 verify_state_save=0 00:25:38.180 do_verify=1 00:25:38.180 verify=crc32c-intel 00:25:38.180 [job0] 00:25:38.180 filename=/dev/nvme0n1 00:25:38.180 Could not set queue depth (nvme0n1) 00:25:38.180 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:38.180 fio-3.35 00:25:38.180 Starting 1 thread 00:25:41.457 05:39:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:41.457 05:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.457 05:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.457 true 00:25:41.457 05:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.457 05:39:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:41.457 05:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.457 05:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.457 true 00:25:41.457 05:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.457 05:39:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:41.457 05:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.457 05:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.457 true 00:25:41.457 05:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.457 05:39:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:41.457 05:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.457 05:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.457 true 00:25:41.457 05:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.457 05:39:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:44.732 05:39:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:44.732 05:39:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.732 05:39:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:44.732 true 00:25:44.732 05:39:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.732 05:39:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:44.732 05:39:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.732 05:39:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:44.732 true 00:25:44.732 05:39:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.732 
05:39:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:44.732 05:39:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.732 05:39:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:44.732 true 00:25:44.732 05:39:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.732 05:39:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:44.732 05:39:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.732 05:39:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:44.732 true 00:25:44.732 05:39:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.732 05:39:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:44.732 05:39:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3300933 00:26:40.935 00:26:40.935 job0: (groupid=0, jobs=1): err= 0: pid=3301002: Sun Jul 14 05:40:45 2024 00:26:40.935 read: IOPS=113, BW=455KiB/s (466kB/s)(26.7MiB/60024msec) 00:26:40.935 slat (nsec): min=5407, max=76594, avg=18371.65, stdev=9147.65 00:26:40.935 clat (usec): min=345, max=41053k, avg=8378.18, stdev=496930.61 00:26:40.935 lat (usec): min=351, max=41053k, avg=8396.55, stdev=496930.58 00:26:40.935 clat percentiles (usec): 00:26:40.935 | 1.00th=[ 363], 5.00th=[ 375], 10.00th=[ 383], 00:26:40.935 | 20.00th=[ 408], 30.00th=[ 445], 40.00th=[ 474], 00:26:40.935 | 50.00th=[ 486], 60.00th=[ 494], 70.00th=[ 519], 00:26:40.935 | 80.00th=[ 562], 90.00th=[ 635], 95.00th=[ 701], 00:26:40.935 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 41157], 00:26:40.935 | 99.95th=[ 41157], 99.99th=[17112761] 00:26:40.935 write: IOPS=119, BW=478KiB/s (489kB/s)(28.0MiB/60024msec); 0 zone resets 00:26:40.935 slat (nsec): min=5910, max=84568, avg=20533.62, stdev=11772.56 00:26:40.935 clat (usec): min=231, max=1119, avg=346.68, stdev=64.99 00:26:40.935 lat (usec): min=237, max=1137, avg=367.21, stdev=71.81 00:26:40.935 clat percentiles (usec): 00:26:40.935 | 1.00th=[ 241], 5.00th=[ 249], 10.00th=[ 262], 20.00th=[ 281], 00:26:40.935 | 30.00th=[ 302], 40.00th=[ 326], 50.00th=[ 347], 60.00th=[ 371], 00:26:40.935 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 433], 95.00th=[ 453], 00:26:40.935 | 99.00th=[ 490], 99.50th=[ 498], 99.90th=[ 523], 99.95th=[ 652], 00:26:40.935 | 99.99th=[ 1123] 00:26:40.935 bw ( KiB/s): min= 896, max= 6648, per=100.00%, avg=4096.00, stdev=1609.07, samples=14 00:26:40.936 iops : min= 224, max= 1662, avg=1024.00, stdev=402.27, samples=14 00:26:40.936 lat (usec) : 250=2.64%, 500=78.82%, 750=16.26%, 1000=0.01% 00:26:40.936 lat (msec) : 2=0.01%, 50=2.27%, >=2000=0.01% 00:26:40.936 cpu : usr=0.32%, sys=0.54%, ctx=13995, majf=0, minf=2 00:26:40.936 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.936 issued rwts: total=6826,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.936 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:40.936 00:26:40.936 Run status group 0 (all jobs): 00:26:40.936 READ: bw=455KiB/s (466kB/s), 455KiB/s-455KiB/s (466kB/s-466kB/s), 
io=26.7MiB (28.0MB), run=60024-60024msec 00:26:40.936 WRITE: bw=478KiB/s (489kB/s), 478KiB/s-478KiB/s (489kB/s-489kB/s), io=28.0MiB (29.4MB), run=60024-60024msec 00:26:40.936 00:26:40.936 Disk stats (read/write): 00:26:40.936 nvme0n1: ios=6921/7168, merge=0/0, ticks=16898/2344, in_queue=19242, util=99.79% 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:40.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:40.936 nvmf hotplug test: fio successful as expected 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:40.936 rmmod nvme_tcp 00:26:40.936 rmmod nvme_fabrics 00:26:40.936 rmmod nvme_keyring 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 3300508 ']' 00:26:40.936 05:40:45 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 3300508 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 3300508 ']' 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 3300508 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3300508 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3300508' 00:26:40.936 killing process with pid 3300508 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 3300508 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 3300508 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.936 05:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.936 05:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:40.936 00:26:40.936 real 1m8.137s 00:26:40.936 user 4m10.921s 00:26:40.936 sys 0m6.825s 00:26:40.936 05:40:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:40.936 05:40:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:40.936 ************************************ 00:26:40.936 END TEST nvmf_initiator_timeout 00:26:40.936 ************************************ 00:26:40.936 05:40:47 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:40.936 05:40:47 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:40.936 05:40:47 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:40.936 05:40:47 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:40.936 05:40:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:43.468 
05:40:49 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:43.468 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:43.468 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@366 
-- # (( 0 > 0 )) 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.468 05:40:49 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:43.469 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:43.469 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:43.469 05:40:49 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:43.469 05:40:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:43.469 05:40:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:43.469 05:40:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:43.469 ************************************ 00:26:43.469 START TEST nvmf_perf_adq 00:26:43.469 ************************************ 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:43.469 * Looking for test storage... 
00:26:43.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:43.469 05:40:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:45.368 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:45.368 Found 0000:0a:00.1 (0x8086 - 0x159b) 
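The pass echoed above is the NIC discovery step from nvmf/common.sh: the script walks the PCI IDs it cached earlier, keeps the Intel E810 functions (vendor 0x8086, device 0x159b, bound to the ice driver), and records the kernel netdev names it finds under each function in sysfs. A minimal stand-alone sketch of that loop, using lspci in place of the script's pci_bus_cache (an approximation for illustration, not the harness code itself):

    #!/usr/bin/env bash
    # Approximate gather_supported_nvmf_pci_devs: find E810 functions
    # (0x8086:0x159b) and list the net devices sitting under each in sysfs.
    net_devs=()
    while read -r pci _; do
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] || continue
        echo "Found net devices under $pci: ${path##*/}"
        net_devs+=("${path##*/}")
      done
    done < <(lspci -Dnmm -d 8086:159b | awk '{print $1}')
    printf 'TCP_INTERFACE_LIST entry: %s\n' "${net_devs[@]}"

In this run the scan yields cvl_0_0 and cvl_0_1, the two ports the rest of the test uses as target and initiator interfaces.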
00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:45.368 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:45.368 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:45.368 05:40:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:45.934 05:40:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:47.838 05:40:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:53.105 05:40:59 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:53.106 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:53.106 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:53.106 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:53.106 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.106 05:40:59 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:53.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:26:53.106 00:26:53.106 --- 10.0.0.2 ping statistics --- 00:26:53.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.106 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:53.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:26:53.106 00:26:53.106 --- 10.0.0.1 ping statistics --- 00:26:53.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.106 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.106 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:53.107 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:53.107 05:40:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:53.107 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:53.107 05:40:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:53.107 05:40:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:53.107 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3312510 00:26:53.107 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:53.107 05:40:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3312510 00:26:53.107 05:40:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3312510 ']' 00:26:53.107 05:40:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.107 05:40:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:53.107 05:40:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.107 05:40:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:53.107 05:40:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:53.107 [2024-07-14 05:40:59.890482] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
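By this point nvmf_tcp_init has laid out the back-to-back topology that the ping pair above just verified: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of those steps, taken from the commands echoed in the log (the iptables rule simply admits the NVMe/TCP port on the initiator interface):

    # Keep target and initiator on separate network stacks via a netns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator

The nvmf_tgt that starts next is launched with ip netns exec cvl_0_0_ns_spdk, so its listener on 10.0.0.2:4420 lives entirely inside that namespace.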
00:26:53.107 [2024-07-14 05:40:59.890558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.107 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.107 [2024-07-14 05:40:59.955660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:53.107 [2024-07-14 05:41:00.056104] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.107 [2024-07-14 05:41:00.056184] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.107 [2024-07-14 05:41:00.056205] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.107 [2024-07-14 05:41:00.056217] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.107 [2024-07-14 05:41:00.056242] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:53.107 [2024-07-14 05:41:00.056339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.107 [2024-07-14 05:41:00.056404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.107 [2024-07-14 05:41:00.056471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:53.107 [2024-07-14 05:41:00.056473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.107 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:26:53.366 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.366 05:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:53.366 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.366 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:53.366 [2024-07-14 05:41:00.300590] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:53.367 Malloc1 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:53.367 [2024-07-14 05:41:00.352459] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3312607 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:53.367 05:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:53.367 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.345 05:41:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:55.345 05:41:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.345 05:41:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.345 05:41:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.345 05:41:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:55.345 "tick_rate": 2700000000, 
00:26:55.345 "poll_groups": [ 00:26:55.345 { 00:26:55.345 "name": "nvmf_tgt_poll_group_000", 00:26:55.345 "admin_qpairs": 1, 00:26:55.345 "io_qpairs": 1, 00:26:55.345 "current_admin_qpairs": 1, 00:26:55.345 "current_io_qpairs": 1, 00:26:55.345 "pending_bdev_io": 0, 00:26:55.345 "completed_nvme_io": 19241, 00:26:55.345 "transports": [ 00:26:55.345 { 00:26:55.345 "trtype": "TCP" 00:26:55.345 } 00:26:55.345 ] 00:26:55.345 }, 00:26:55.345 { 00:26:55.345 "name": "nvmf_tgt_poll_group_001", 00:26:55.345 "admin_qpairs": 0, 00:26:55.345 "io_qpairs": 1, 00:26:55.345 "current_admin_qpairs": 0, 00:26:55.345 "current_io_qpairs": 1, 00:26:55.345 "pending_bdev_io": 0, 00:26:55.345 "completed_nvme_io": 17960, 00:26:55.345 "transports": [ 00:26:55.345 { 00:26:55.345 "trtype": "TCP" 00:26:55.345 } 00:26:55.345 ] 00:26:55.345 }, 00:26:55.345 { 00:26:55.345 "name": "nvmf_tgt_poll_group_002", 00:26:55.345 "admin_qpairs": 0, 00:26:55.345 "io_qpairs": 1, 00:26:55.345 "current_admin_qpairs": 0, 00:26:55.345 "current_io_qpairs": 1, 00:26:55.345 "pending_bdev_io": 0, 00:26:55.345 "completed_nvme_io": 17705, 00:26:55.345 "transports": [ 00:26:55.345 { 00:26:55.345 "trtype": "TCP" 00:26:55.345 } 00:26:55.345 ] 00:26:55.345 }, 00:26:55.345 { 00:26:55.345 "name": "nvmf_tgt_poll_group_003", 00:26:55.345 "admin_qpairs": 0, 00:26:55.345 "io_qpairs": 1, 00:26:55.345 "current_admin_qpairs": 0, 00:26:55.345 "current_io_qpairs": 1, 00:26:55.345 "pending_bdev_io": 0, 00:26:55.345 "completed_nvme_io": 18768, 00:26:55.345 "transports": [ 00:26:55.345 { 00:26:55.345 "trtype": "TCP" 00:26:55.345 } 00:26:55.345 ] 00:26:55.345 } 00:26:55.345 ] 00:26:55.345 }' 00:26:55.345 05:41:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:55.345 05:41:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:55.345 05:41:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:55.345 05:41:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:55.345 05:41:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3312607 00:27:05.311 Initializing NVMe Controllers 00:27:05.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:05.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:05.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:05.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:05.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:05.311 Initialization complete. Launching workers. 
00:27:05.311 ======================================================== 00:27:05.311 Latency(us) 00:27:05.311 Device Information : IOPS MiB/s Average min max 00:27:05.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10524.40 41.11 6081.54 1391.09 8439.31 00:27:05.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10052.60 39.27 6366.20 3015.23 9867.90 00:27:05.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9889.70 38.63 6472.08 2130.36 9508.83 00:27:05.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10851.20 42.39 5898.44 2049.65 8820.27 00:27:05.311 ======================================================== 00:27:05.311 Total : 41317.90 161.40 6196.19 1391.09 9867.90 00:27:05.311 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:05.311 rmmod nvme_tcp 00:27:05.311 rmmod nvme_fabrics 00:27:05.311 rmmod nvme_keyring 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3312510 ']' 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3312510 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3312510 ']' 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3312510 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3312510 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3312510' 00:27:05.311 killing process with pid 3312510 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3312510 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3312510 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.311 05:41:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.877 05:41:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:05.877 05:41:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:05.877 05:41:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:06.812 05:41:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:08.713 05:41:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.983 05:41:20 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:13.983 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:13.983 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:13.983 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:13.983 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.983 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.984 
05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:13.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:27:13.984 00:27:13.984 --- 10.0.0.2 ping statistics --- 00:27:13.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.984 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:13.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:27:13.984 00:27:13.984 --- 10.0.0.1 ping statistics --- 00:27:13.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.984 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:13.984 net.core.busy_poll = 1 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:13.984 net.core.busy_read = 1 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3315217 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3315217 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3315217 ']' 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:13.984 05:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.984 [2024-07-14 05:41:20.887220] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:13.984 [2024-07-14 05:41:20.887318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.984 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.984 [2024-07-14 05:41:20.952964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:13.984 [2024-07-14 05:41:21.042663] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.984 [2024-07-14 05:41:21.042726] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.984 [2024-07-14 05:41:21.042739] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:13.984 [2024-07-14 05:41:21.042750] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:13.984 [2024-07-14 05:41:21.042759] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
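The adq_configure_driver trace above reduces to a short, repeatable sequence; a minimal sketch, assuming the same ice-driver interface cvl_0_0 inside the cvl_0_0_ns_spdk namespace and the NVMe/TCP listener at 10.0.0.2:4420 used in this run:

  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # two traffic classes: 2 queues at offset 0 for TC0 (default), 2 queues at offset 2 for TC1 (ADQ)
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  # steer NVMe/TCP traffic for 10.0.0.2:4420 into hardware TC 1, skipping the software fallback
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper invoked afterwards is there to align transmit-queue selection (XPS) with the receive queues, so a connection's TX traffic stays on the same channel its RX lands on.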
00:27:13.984 [2024-07-14 05:41:21.042840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.984 [2024-07-14 05:41:21.042906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.984 [2024-07-14 05:41:21.042974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:13.984 [2024-07-14 05:41:21.042977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.241 [2024-07-14 05:41:21.293810] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.241 Malloc1 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.241 05:41:21 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.241 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.242 05:41:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:14.242 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.242 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.242 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.242 05:41:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:14.242 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.242 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.242 [2024-07-14 05:41:21.347074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:14.499 05:41:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.499 05:41:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3315296 00:27:14.499 05:41:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:14.499 05:41:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:14.499 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.399 05:41:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:16.399 05:41:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.399 05:41:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.399 05:41:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.399 05:41:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:16.399 "tick_rate": 2700000000, 00:27:16.399 "poll_groups": [ 00:27:16.399 { 00:27:16.399 "name": "nvmf_tgt_poll_group_000", 00:27:16.399 "admin_qpairs": 1, 00:27:16.399 "io_qpairs": 2, 00:27:16.399 "current_admin_qpairs": 1, 00:27:16.399 "current_io_qpairs": 2, 00:27:16.399 "pending_bdev_io": 0, 00:27:16.399 "completed_nvme_io": 24878, 00:27:16.399 "transports": [ 00:27:16.399 { 00:27:16.399 "trtype": "TCP" 00:27:16.399 } 00:27:16.399 ] 00:27:16.399 }, 00:27:16.399 { 00:27:16.399 "name": "nvmf_tgt_poll_group_001", 00:27:16.399 "admin_qpairs": 0, 00:27:16.399 "io_qpairs": 2, 00:27:16.399 "current_admin_qpairs": 0, 00:27:16.399 "current_io_qpairs": 2, 00:27:16.399 "pending_bdev_io": 0, 00:27:16.399 "completed_nvme_io": 24318, 00:27:16.399 "transports": [ 00:27:16.399 { 00:27:16.399 "trtype": "TCP" 00:27:16.399 } 00:27:16.399 ] 00:27:16.399 }, 00:27:16.399 { 00:27:16.399 "name": "nvmf_tgt_poll_group_002", 00:27:16.399 "admin_qpairs": 0, 00:27:16.399 "io_qpairs": 0, 00:27:16.399 "current_admin_qpairs": 0, 00:27:16.399 "current_io_qpairs": 0, 00:27:16.399 "pending_bdev_io": 0, 00:27:16.399 "completed_nvme_io": 0, 
00:27:16.399 "transports": [ 00:27:16.399 { 00:27:16.399 "trtype": "TCP" 00:27:16.399 } 00:27:16.399 ] 00:27:16.399 }, 00:27:16.399 { 00:27:16.399 "name": "nvmf_tgt_poll_group_003", 00:27:16.399 "admin_qpairs": 0, 00:27:16.399 "io_qpairs": 0, 00:27:16.399 "current_admin_qpairs": 0, 00:27:16.399 "current_io_qpairs": 0, 00:27:16.399 "pending_bdev_io": 0, 00:27:16.399 "completed_nvme_io": 0, 00:27:16.399 "transports": [ 00:27:16.399 { 00:27:16.399 "trtype": "TCP" 00:27:16.399 } 00:27:16.399 ] 00:27:16.399 } 00:27:16.399 ] 00:27:16.399 }' 00:27:16.399 05:41:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:16.399 05:41:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:16.399 05:41:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:16.399 05:41:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:16.399 05:41:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3315296 00:27:24.540 Initializing NVMe Controllers 00:27:24.540 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:24.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:24.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:24.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:24.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:24.540 Initialization complete. Launching workers. 00:27:24.540 ======================================================== 00:27:24.540 Latency(us) 00:27:24.540 Device Information : IOPS MiB/s Average min max 00:27:24.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7189.00 28.08 8935.14 1571.89 54000.67 00:27:24.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7658.30 29.92 8358.10 1791.16 52643.96 00:27:24.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6373.70 24.90 10041.49 1917.59 53776.20 00:27:24.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5913.00 23.10 10848.09 1749.12 54310.45 00:27:24.540 ======================================================== 00:27:24.540 Total : 27134.00 105.99 9449.02 1571.89 54310.45 00:27:24.540 00:27:24.540 05:41:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:24.540 05:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:24.540 05:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:24.540 05:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:24.540 05:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:24.540 05:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:24.540 05:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:24.540 rmmod nvme_tcp 00:27:24.540 rmmod nvme_fabrics 00:27:24.540 rmmod nvme_keyring 00:27:24.540 05:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:24.540 05:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:24.540 05:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:24.540 05:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3315217 ']' 00:27:24.798 05:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3315217 00:27:24.798 05:41:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3315217 ']' 00:27:24.798 05:41:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3315217 00:27:24.798 05:41:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:24.798 05:41:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:24.798 05:41:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3315217 00:27:24.798 05:41:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:24.798 05:41:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:24.798 05:41:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3315217' 00:27:24.798 killing process with pid 3315217 00:27:24.798 05:41:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3315217 00:27:24.798 05:41:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3315217 00:27:25.056 05:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:25.056 05:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:25.056 05:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:25.057 05:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:25.057 05:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:25.057 05:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.057 05:41:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:25.057 05:41:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.339 05:41:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:28.339 05:41:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:28.339 00:27:28.339 real 0m44.934s 00:27:28.339 user 2m40.410s 00:27:28.339 sys 0m9.527s 00:27:28.339 05:41:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:28.339 05:41:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:28.339 ************************************ 00:27:28.339 END TEST nvmf_perf_adq 00:27:28.339 ************************************ 00:27:28.339 05:41:34 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:28.339 05:41:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:28.339 05:41:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:28.339 05:41:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:28.339 ************************************ 00:27:28.339 START TEST nvmf_shutdown 00:27:28.339 ************************************ 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:28.339 * Looking for test storage... 
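The pass/fail decision for the perf_adq run above comes from the nvmf_get_stats output: with the target on 4 cores (-m 0xF) and the flower filter steering all NVMe/TCP traffic onto the 2 queues of TC 1, the 4 I/O qpairs opened by spdk_nvme_perf must land on only 2 of the 4 poll groups. A minimal sketch of that check, assuming rpc.py reaches the target over its default /var/tmp/spdk.sock:

  idle=$(scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
  # at least half of the poll groups must have stayed idle, i.e. ADQ confined the I/O qpairs
  [[ $idle -lt 2 ]] && { echo "ADQ steering failed"; exit 1; }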
00:27:28.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.339 05:41:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:28.340 ************************************ 00:27:28.340 START TEST nvmf_shutdown_tc1 00:27:28.340 ************************************ 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:28.340 05:41:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:28.340 05:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:30.242 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:30.242 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:30.242 05:41:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:30.242 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.242 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:30.243 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:30.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:27:30.243 00:27:30.243 --- 10.0.0.2 ping statistics --- 00:27:30.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.243 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:30.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:27:30.243 00:27:30.243 --- 10.0.0.1 ping statistics --- 00:27:30.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.243 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3318584 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3318584 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3318584 ']' 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:30.243 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:30.243 [2024-07-14 05:41:37.256463] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:27:30.243 [2024-07-14 05:41:37.256542] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.243 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.243 [2024-07-14 05:41:37.322986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:30.502 [2024-07-14 05:41:37.413860] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.502 [2024-07-14 05:41:37.413940] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.502 [2024-07-14 05:41:37.413954] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.502 [2024-07-14 05:41:37.413966] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.502 [2024-07-14 05:41:37.413976] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:30.502 [2024-07-14 05:41:37.414072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.502 [2024-07-14 05:41:37.414095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:30.502 [2024-07-14 05:41:37.414154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:30.502 [2024-07-14 05:41:37.414157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:30.502 [2024-07-14 05:41:37.565706] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.502 05:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:30.760 Malloc1 00:27:30.760 [2024-07-14 05:41:37.649200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.760 Malloc2 00:27:30.760 Malloc3 00:27:30.760 Malloc4 00:27:30.760 Malloc5 00:27:30.760 Malloc6 00:27:31.018 Malloc7 00:27:31.018 Malloc8 00:27:31.018 Malloc9 00:27:31.018 Malloc10 00:27:31.018 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.018 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:31.018 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:31.018 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:31.018 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3318767 00:27:31.018 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3318767 /var/tmp/bdevperf.sock 00:27:31.018 05:41:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3318767 ']' 00:27:31.018 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:31.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.019 { 00:27:31.019 "params": { 00:27:31.019 "name": "Nvme$subsystem", 00:27:31.019 "trtype": "$TEST_TRANSPORT", 00:27:31.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.019 "adrfam": "ipv4", 00:27:31.019 "trsvcid": "$NVMF_PORT", 00:27:31.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.019 "hdgst": ${hdgst:-false}, 00:27:31.019 "ddgst": ${ddgst:-false} 00:27:31.019 }, 00:27:31.019 "method": "bdev_nvme_attach_controller" 00:27:31.019 } 00:27:31.019 EOF 00:27:31.019 )") 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.019 { 00:27:31.019 "params": { 00:27:31.019 "name": "Nvme$subsystem", 00:27:31.019 "trtype": "$TEST_TRANSPORT", 00:27:31.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.019 "adrfam": "ipv4", 00:27:31.019 "trsvcid": "$NVMF_PORT", 00:27:31.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.019 "hdgst": ${hdgst:-false}, 00:27:31.019 "ddgst": ${ddgst:-false} 00:27:31.019 }, 00:27:31.019 "method": "bdev_nvme_attach_controller" 00:27:31.019 } 00:27:31.019 EOF 00:27:31.019 )") 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.019 { 00:27:31.019 "params": { 00:27:31.019 "name": "Nvme$subsystem", 00:27:31.019 "trtype": 
"$TEST_TRANSPORT", 00:27:31.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.019 "adrfam": "ipv4", 00:27:31.019 "trsvcid": "$NVMF_PORT", 00:27:31.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.019 "hdgst": ${hdgst:-false}, 00:27:31.019 "ddgst": ${ddgst:-false} 00:27:31.019 }, 00:27:31.019 "method": "bdev_nvme_attach_controller" 00:27:31.019 } 00:27:31.019 EOF 00:27:31.019 )") 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.019 { 00:27:31.019 "params": { 00:27:31.019 "name": "Nvme$subsystem", 00:27:31.019 "trtype": "$TEST_TRANSPORT", 00:27:31.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.019 "adrfam": "ipv4", 00:27:31.019 "trsvcid": "$NVMF_PORT", 00:27:31.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.019 "hdgst": ${hdgst:-false}, 00:27:31.019 "ddgst": ${ddgst:-false} 00:27:31.019 }, 00:27:31.019 "method": "bdev_nvme_attach_controller" 00:27:31.019 } 00:27:31.019 EOF 00:27:31.019 )") 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.019 { 00:27:31.019 "params": { 00:27:31.019 "name": "Nvme$subsystem", 00:27:31.019 "trtype": "$TEST_TRANSPORT", 00:27:31.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.019 "adrfam": "ipv4", 00:27:31.019 "trsvcid": "$NVMF_PORT", 00:27:31.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.019 "hdgst": ${hdgst:-false}, 00:27:31.019 "ddgst": ${ddgst:-false} 00:27:31.019 }, 00:27:31.019 "method": "bdev_nvme_attach_controller" 00:27:31.019 } 00:27:31.019 EOF 00:27:31.019 )") 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.019 { 00:27:31.019 "params": { 00:27:31.019 "name": "Nvme$subsystem", 00:27:31.019 "trtype": "$TEST_TRANSPORT", 00:27:31.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.019 "adrfam": "ipv4", 00:27:31.019 "trsvcid": "$NVMF_PORT", 00:27:31.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.019 "hdgst": ${hdgst:-false}, 00:27:31.019 "ddgst": ${ddgst:-false} 00:27:31.019 }, 00:27:31.019 "method": "bdev_nvme_attach_controller" 00:27:31.019 } 00:27:31.019 EOF 00:27:31.019 )") 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.019 { 00:27:31.019 "params": { 00:27:31.019 "name": "Nvme$subsystem", 00:27:31.019 "trtype": "$TEST_TRANSPORT", 
00:27:31.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.019 "adrfam": "ipv4", 00:27:31.019 "trsvcid": "$NVMF_PORT", 00:27:31.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.019 "hdgst": ${hdgst:-false}, 00:27:31.019 "ddgst": ${ddgst:-false} 00:27:31.019 }, 00:27:31.019 "method": "bdev_nvme_attach_controller" 00:27:31.019 } 00:27:31.019 EOF 00:27:31.019 )") 00:27:31.019 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:31.277 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.277 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.277 { 00:27:31.277 "params": { 00:27:31.277 "name": "Nvme$subsystem", 00:27:31.277 "trtype": "$TEST_TRANSPORT", 00:27:31.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.277 "adrfam": "ipv4", 00:27:31.277 "trsvcid": "$NVMF_PORT", 00:27:31.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.278 "hdgst": ${hdgst:-false}, 00:27:31.278 "ddgst": ${ddgst:-false} 00:27:31.278 }, 00:27:31.278 "method": "bdev_nvme_attach_controller" 00:27:31.278 } 00:27:31.278 EOF 00:27:31.278 )") 00:27:31.278 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:31.278 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.278 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.278 { 00:27:31.278 "params": { 00:27:31.278 "name": "Nvme$subsystem", 00:27:31.278 "trtype": "$TEST_TRANSPORT", 00:27:31.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.278 "adrfam": "ipv4", 00:27:31.278 "trsvcid": "$NVMF_PORT", 00:27:31.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.278 "hdgst": ${hdgst:-false}, 00:27:31.278 "ddgst": ${ddgst:-false} 00:27:31.278 }, 00:27:31.278 "method": "bdev_nvme_attach_controller" 00:27:31.278 } 00:27:31.278 EOF 00:27:31.278 )") 00:27:31.278 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:31.278 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.278 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.278 { 00:27:31.278 "params": { 00:27:31.278 "name": "Nvme$subsystem", 00:27:31.278 "trtype": "$TEST_TRANSPORT", 00:27:31.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.278 "adrfam": "ipv4", 00:27:31.278 "trsvcid": "$NVMF_PORT", 00:27:31.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.278 "hdgst": ${hdgst:-false}, 00:27:31.278 "ddgst": ${ddgst:-false} 00:27:31.278 }, 00:27:31.278 "method": "bdev_nvme_attach_controller" 00:27:31.278 } 00:27:31.278 EOF 00:27:31.278 )") 00:27:31.278 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:31.278 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:31.278 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:31.278 05:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:31.278 "params": { 00:27:31.278 "name": "Nvme1", 00:27:31.278 "trtype": "tcp", 00:27:31.278 "traddr": "10.0.0.2", 00:27:31.278 "adrfam": "ipv4", 00:27:31.278 "trsvcid": "4420", 00:27:31.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:31.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:31.278 "hdgst": false, 00:27:31.278 "ddgst": false 00:27:31.278 }, 00:27:31.278 "method": "bdev_nvme_attach_controller" 00:27:31.278 },{ 00:27:31.278 "params": { 00:27:31.278 "name": "Nvme2", 00:27:31.278 "trtype": "tcp", 00:27:31.278 "traddr": "10.0.0.2", 00:27:31.278 "adrfam": "ipv4", 00:27:31.278 "trsvcid": "4420", 00:27:31.278 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:31.278 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:31.278 "hdgst": false, 00:27:31.278 "ddgst": false 00:27:31.278 }, 00:27:31.278 "method": "bdev_nvme_attach_controller" 00:27:31.278 },{ 00:27:31.278 "params": { 00:27:31.278 "name": "Nvme3", 00:27:31.278 "trtype": "tcp", 00:27:31.278 "traddr": "10.0.0.2", 00:27:31.278 "adrfam": "ipv4", 00:27:31.278 "trsvcid": "4420", 00:27:31.278 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:31.278 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:31.278 "hdgst": false, 00:27:31.278 "ddgst": false 00:27:31.278 }, 00:27:31.278 "method": "bdev_nvme_attach_controller" 00:27:31.278 },{ 00:27:31.278 "params": { 00:27:31.278 "name": "Nvme4", 00:27:31.278 "trtype": "tcp", 00:27:31.278 "traddr": "10.0.0.2", 00:27:31.278 "adrfam": "ipv4", 00:27:31.278 "trsvcid": "4420", 00:27:31.278 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:31.278 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:31.278 "hdgst": false, 00:27:31.278 "ddgst": false 00:27:31.278 }, 00:27:31.278 "method": "bdev_nvme_attach_controller" 00:27:31.278 },{ 00:27:31.278 "params": { 00:27:31.278 "name": "Nvme5", 00:27:31.278 "trtype": "tcp", 00:27:31.278 "traddr": "10.0.0.2", 00:27:31.278 "adrfam": "ipv4", 00:27:31.278 "trsvcid": "4420", 00:27:31.278 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:31.278 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:31.278 "hdgst": false, 00:27:31.278 "ddgst": false 00:27:31.278 }, 00:27:31.278 "method": "bdev_nvme_attach_controller" 00:27:31.278 },{ 00:27:31.278 "params": { 00:27:31.278 "name": "Nvme6", 00:27:31.278 "trtype": "tcp", 00:27:31.278 "traddr": "10.0.0.2", 00:27:31.278 "adrfam": "ipv4", 00:27:31.278 "trsvcid": "4420", 00:27:31.278 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:31.278 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:31.278 "hdgst": false, 00:27:31.278 "ddgst": false 00:27:31.278 }, 00:27:31.278 "method": "bdev_nvme_attach_controller" 00:27:31.278 },{ 00:27:31.278 "params": { 00:27:31.278 "name": "Nvme7", 00:27:31.278 "trtype": "tcp", 00:27:31.278 "traddr": "10.0.0.2", 00:27:31.278 "adrfam": "ipv4", 00:27:31.278 "trsvcid": "4420", 00:27:31.278 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:31.278 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:31.278 "hdgst": false, 00:27:31.278 "ddgst": false 00:27:31.278 }, 00:27:31.278 "method": "bdev_nvme_attach_controller" 00:27:31.278 },{ 00:27:31.278 "params": { 00:27:31.278 "name": "Nvme8", 00:27:31.278 "trtype": "tcp", 00:27:31.278 "traddr": "10.0.0.2", 00:27:31.278 "adrfam": "ipv4", 00:27:31.278 "trsvcid": "4420", 00:27:31.278 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:31.278 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:31.278 "hdgst": false, 
00:27:31.278 "ddgst": false 00:27:31.278 }, 00:27:31.278 "method": "bdev_nvme_attach_controller" 00:27:31.278 },{ 00:27:31.278 "params": { 00:27:31.278 "name": "Nvme9", 00:27:31.278 "trtype": "tcp", 00:27:31.278 "traddr": "10.0.0.2", 00:27:31.278 "adrfam": "ipv4", 00:27:31.278 "trsvcid": "4420", 00:27:31.278 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:31.278 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:31.278 "hdgst": false, 00:27:31.278 "ddgst": false 00:27:31.278 }, 00:27:31.278 "method": "bdev_nvme_attach_controller" 00:27:31.278 },{ 00:27:31.278 "params": { 00:27:31.278 "name": "Nvme10", 00:27:31.278 "trtype": "tcp", 00:27:31.278 "traddr": "10.0.0.2", 00:27:31.278 "adrfam": "ipv4", 00:27:31.278 "trsvcid": "4420", 00:27:31.278 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:31.278 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:31.278 "hdgst": false, 00:27:31.278 "ddgst": false 00:27:31.278 }, 00:27:31.278 "method": "bdev_nvme_attach_controller" 00:27:31.278 }' 00:27:31.278 [2024-07-14 05:41:38.144079] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:31.278 [2024-07-14 05:41:38.144175] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:31.278 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.278 [2024-07-14 05:41:38.209212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.278 [2024-07-14 05:41:38.296092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.174 05:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:33.174 05:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:33.174 05:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:33.174 05:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.174 05:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:33.174 05:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.174 05:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3318767 00:27:33.174 05:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:33.174 05:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:34.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3318767 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3318584 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@532 -- # local subsystem config 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.105 { 00:27:34.105 "params": { 00:27:34.105 "name": "Nvme$subsystem", 00:27:34.105 "trtype": "$TEST_TRANSPORT", 00:27:34.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.105 "adrfam": "ipv4", 00:27:34.105 "trsvcid": "$NVMF_PORT", 00:27:34.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.105 "hdgst": ${hdgst:-false}, 00:27:34.105 "ddgst": ${ddgst:-false} 00:27:34.105 }, 00:27:34.105 "method": "bdev_nvme_attach_controller" 00:27:34.105 } 00:27:34.105 EOF 00:27:34.105 )") 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.105 { 00:27:34.105 "params": { 00:27:34.105 "name": "Nvme$subsystem", 00:27:34.105 "trtype": "$TEST_TRANSPORT", 00:27:34.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.105 "adrfam": "ipv4", 00:27:34.105 "trsvcid": "$NVMF_PORT", 00:27:34.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.105 "hdgst": ${hdgst:-false}, 00:27:34.105 "ddgst": ${ddgst:-false} 00:27:34.105 }, 00:27:34.105 "method": "bdev_nvme_attach_controller" 00:27:34.105 } 00:27:34.105 EOF 00:27:34.105 )") 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.105 { 00:27:34.105 "params": { 00:27:34.105 "name": "Nvme$subsystem", 00:27:34.105 "trtype": "$TEST_TRANSPORT", 00:27:34.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.105 "adrfam": "ipv4", 00:27:34.105 "trsvcid": "$NVMF_PORT", 00:27:34.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.105 "hdgst": ${hdgst:-false}, 00:27:34.105 "ddgst": ${ddgst:-false} 00:27:34.105 }, 00:27:34.105 "method": "bdev_nvme_attach_controller" 00:27:34.105 } 00:27:34.105 EOF 00:27:34.105 )") 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.105 { 00:27:34.105 "params": { 00:27:34.105 "name": "Nvme$subsystem", 00:27:34.105 "trtype": "$TEST_TRANSPORT", 00:27:34.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.105 "adrfam": "ipv4", 00:27:34.105 "trsvcid": "$NVMF_PORT", 00:27:34.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.105 "hdgst": ${hdgst:-false}, 00:27:34.105 "ddgst": ${ddgst:-false} 00:27:34.105 }, 00:27:34.105 "method": "bdev_nvme_attach_controller" 00:27:34.105 } 00:27:34.105 EOF 00:27:34.105 )") 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.105 { 00:27:34.105 "params": { 00:27:34.105 "name": "Nvme$subsystem", 00:27:34.105 "trtype": "$TEST_TRANSPORT", 00:27:34.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.105 "adrfam": "ipv4", 00:27:34.105 "trsvcid": "$NVMF_PORT", 00:27:34.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.105 "hdgst": ${hdgst:-false}, 00:27:34.105 "ddgst": ${ddgst:-false} 00:27:34.105 }, 00:27:34.105 "method": "bdev_nvme_attach_controller" 00:27:34.105 } 00:27:34.105 EOF 00:27:34.105 )") 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.105 { 00:27:34.105 "params": { 00:27:34.105 "name": "Nvme$subsystem", 00:27:34.105 "trtype": "$TEST_TRANSPORT", 00:27:34.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.105 "adrfam": "ipv4", 00:27:34.105 "trsvcid": "$NVMF_PORT", 00:27:34.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.105 "hdgst": ${hdgst:-false}, 00:27:34.105 "ddgst": ${ddgst:-false} 00:27:34.105 }, 00:27:34.105 "method": "bdev_nvme_attach_controller" 00:27:34.105 } 00:27:34.105 EOF 00:27:34.105 )") 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.105 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.105 { 00:27:34.105 "params": { 00:27:34.105 "name": "Nvme$subsystem", 00:27:34.105 "trtype": "$TEST_TRANSPORT", 00:27:34.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.105 "adrfam": "ipv4", 00:27:34.105 "trsvcid": "$NVMF_PORT", 00:27:34.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.105 "hdgst": ${hdgst:-false}, 00:27:34.106 "ddgst": ${ddgst:-false} 00:27:34.106 }, 00:27:34.106 "method": "bdev_nvme_attach_controller" 00:27:34.106 } 00:27:34.106 EOF 00:27:34.106 )") 00:27:34.106 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.106 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.106 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.106 { 00:27:34.106 "params": { 00:27:34.106 "name": "Nvme$subsystem", 00:27:34.106 "trtype": "$TEST_TRANSPORT", 00:27:34.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.106 "adrfam": "ipv4", 00:27:34.106 "trsvcid": "$NVMF_PORT", 00:27:34.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.106 "hdgst": ${hdgst:-false}, 00:27:34.106 "ddgst": ${ddgst:-false} 00:27:34.106 }, 00:27:34.106 "method": "bdev_nvme_attach_controller" 00:27:34.106 } 00:27:34.106 EOF 00:27:34.106 )") 00:27:34.106 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:27:34.106 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.106 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.106 { 00:27:34.106 "params": { 00:27:34.106 "name": "Nvme$subsystem", 00:27:34.106 "trtype": "$TEST_TRANSPORT", 00:27:34.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.106 "adrfam": "ipv4", 00:27:34.106 "trsvcid": "$NVMF_PORT", 00:27:34.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.106 "hdgst": ${hdgst:-false}, 00:27:34.106 "ddgst": ${ddgst:-false} 00:27:34.106 }, 00:27:34.106 "method": "bdev_nvme_attach_controller" 00:27:34.106 } 00:27:34.106 EOF 00:27:34.106 )") 00:27:34.106 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.106 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.106 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.106 { 00:27:34.106 "params": { 00:27:34.106 "name": "Nvme$subsystem", 00:27:34.106 "trtype": "$TEST_TRANSPORT", 00:27:34.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.106 "adrfam": "ipv4", 00:27:34.106 "trsvcid": "$NVMF_PORT", 00:27:34.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.106 "hdgst": ${hdgst:-false}, 00:27:34.106 "ddgst": ${ddgst:-false} 00:27:34.106 }, 00:27:34.106 "method": "bdev_nvme_attach_controller" 00:27:34.106 } 00:27:34.106 EOF 00:27:34.106 )") 00:27:34.106 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.106 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
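Once the helper app is up, shutdown_tc1 hard-kills it, checks that the nvmf target (pid 3318584 in this run) still answers kill -0, and then runs bdevperf against the same ten subsystems with an identically generated JSON config; the expanded config and the verify run follow below. Condensed, with shell variables standing in as placeholders for the pids shown in the trace:

# Hard-kill the bdev_svc helper (pid 3318767 here), then confirm the target
# process itself survived before driving I/O at it.
kill -9 "$bdev_svc_pid"
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 "$nvmf_tgt_pid"             # fails if the target died along with the helper
"$rootdir"/build/examples/bdevperf \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1   # queue depth 64, 64 KiB I/Os, 1 s verify pass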
00:27:34.106 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:34.106 05:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:34.106 "params": { 00:27:34.106 "name": "Nvme1", 00:27:34.106 "trtype": "tcp", 00:27:34.106 "traddr": "10.0.0.2", 00:27:34.106 "adrfam": "ipv4", 00:27:34.106 "trsvcid": "4420", 00:27:34.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:34.106 "hdgst": false, 00:27:34.106 "ddgst": false 00:27:34.106 }, 00:27:34.106 "method": "bdev_nvme_attach_controller" 00:27:34.106 },{ 00:27:34.106 "params": { 00:27:34.106 "name": "Nvme2", 00:27:34.106 "trtype": "tcp", 00:27:34.106 "traddr": "10.0.0.2", 00:27:34.106 "adrfam": "ipv4", 00:27:34.106 "trsvcid": "4420", 00:27:34.106 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:34.106 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:34.106 "hdgst": false, 00:27:34.106 "ddgst": false 00:27:34.106 }, 00:27:34.106 "method": "bdev_nvme_attach_controller" 00:27:34.106 },{ 00:27:34.106 "params": { 00:27:34.106 "name": "Nvme3", 00:27:34.106 "trtype": "tcp", 00:27:34.106 "traddr": "10.0.0.2", 00:27:34.106 "adrfam": "ipv4", 00:27:34.106 "trsvcid": "4420", 00:27:34.106 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:34.106 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:34.106 "hdgst": false, 00:27:34.106 "ddgst": false 00:27:34.106 }, 00:27:34.106 "method": "bdev_nvme_attach_controller" 00:27:34.106 },{ 00:27:34.106 "params": { 00:27:34.106 "name": "Nvme4", 00:27:34.106 "trtype": "tcp", 00:27:34.106 "traddr": "10.0.0.2", 00:27:34.106 "adrfam": "ipv4", 00:27:34.106 "trsvcid": "4420", 00:27:34.106 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:34.106 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:34.106 "hdgst": false, 00:27:34.106 "ddgst": false 00:27:34.106 }, 00:27:34.106 "method": "bdev_nvme_attach_controller" 00:27:34.106 },{ 00:27:34.106 "params": { 00:27:34.106 "name": "Nvme5", 00:27:34.106 "trtype": "tcp", 00:27:34.106 "traddr": "10.0.0.2", 00:27:34.106 "adrfam": "ipv4", 00:27:34.106 "trsvcid": "4420", 00:27:34.106 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:34.106 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:34.106 "hdgst": false, 00:27:34.106 "ddgst": false 00:27:34.106 }, 00:27:34.106 "method": "bdev_nvme_attach_controller" 00:27:34.106 },{ 00:27:34.106 "params": { 00:27:34.106 "name": "Nvme6", 00:27:34.106 "trtype": "tcp", 00:27:34.106 "traddr": "10.0.0.2", 00:27:34.106 "adrfam": "ipv4", 00:27:34.106 "trsvcid": "4420", 00:27:34.106 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:34.106 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:34.106 "hdgst": false, 00:27:34.106 "ddgst": false 00:27:34.106 }, 00:27:34.106 "method": "bdev_nvme_attach_controller" 00:27:34.106 },{ 00:27:34.106 "params": { 00:27:34.106 "name": "Nvme7", 00:27:34.106 "trtype": "tcp", 00:27:34.106 "traddr": "10.0.0.2", 00:27:34.106 "adrfam": "ipv4", 00:27:34.106 "trsvcid": "4420", 00:27:34.106 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:34.106 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:34.106 "hdgst": false, 00:27:34.106 "ddgst": false 00:27:34.106 }, 00:27:34.106 "method": "bdev_nvme_attach_controller" 00:27:34.106 },{ 00:27:34.106 "params": { 00:27:34.106 "name": "Nvme8", 00:27:34.106 "trtype": "tcp", 00:27:34.106 "traddr": "10.0.0.2", 00:27:34.106 "adrfam": "ipv4", 00:27:34.106 "trsvcid": "4420", 00:27:34.106 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:34.106 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:34.106 "hdgst": false, 
00:27:34.106 "ddgst": false 00:27:34.106 }, 00:27:34.106 "method": "bdev_nvme_attach_controller" 00:27:34.106 },{ 00:27:34.106 "params": { 00:27:34.106 "name": "Nvme9", 00:27:34.106 "trtype": "tcp", 00:27:34.106 "traddr": "10.0.0.2", 00:27:34.106 "adrfam": "ipv4", 00:27:34.106 "trsvcid": "4420", 00:27:34.106 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:34.106 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:34.106 "hdgst": false, 00:27:34.106 "ddgst": false 00:27:34.106 }, 00:27:34.106 "method": "bdev_nvme_attach_controller" 00:27:34.106 },{ 00:27:34.106 "params": { 00:27:34.106 "name": "Nvme10", 00:27:34.106 "trtype": "tcp", 00:27:34.106 "traddr": "10.0.0.2", 00:27:34.106 "adrfam": "ipv4", 00:27:34.106 "trsvcid": "4420", 00:27:34.106 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:34.106 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:34.106 "hdgst": false, 00:27:34.106 "ddgst": false 00:27:34.106 }, 00:27:34.106 "method": "bdev_nvme_attach_controller" 00:27:34.106 }' 00:27:34.106 [2024-07-14 05:41:41.162902] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:34.106 [2024-07-14 05:41:41.162986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3319067 ] 00:27:34.106 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.364 [2024-07-14 05:41:41.232200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.364 [2024-07-14 05:41:41.320644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.270 Running I/O for 1 seconds... 00:27:37.650 00:27:37.650 Latency(us) 00:27:37.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.650 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:37.650 Verification LBA range: start 0x0 length 0x400 00:27:37.650 Nvme1n1 : 1.07 179.50 11.22 0.00 0.00 352909.59 23010.42 274959.93 00:27:37.650 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:37.650 Verification LBA range: start 0x0 length 0x400 00:27:37.650 Nvme2n1 : 1.15 227.44 14.21 0.00 0.00 273070.11 6699.24 265639.25 00:27:37.650 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:37.650 Verification LBA range: start 0x0 length 0x400 00:27:37.650 Nvme3n1 : 1.14 232.41 14.53 0.00 0.00 261072.54 6941.96 254765.13 00:27:37.650 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:37.650 Verification LBA range: start 0x0 length 0x400 00:27:37.650 Nvme4n1 : 1.07 238.46 14.90 0.00 0.00 251879.54 19320.98 262532.36 00:27:37.650 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:37.650 Verification LBA range: start 0x0 length 0x400 00:27:37.650 Nvme5n1 : 1.13 234.95 14.68 0.00 0.00 241724.06 24660.95 246997.90 00:27:37.650 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:37.650 Verification LBA range: start 0x0 length 0x400 00:27:37.650 Nvme6n1 : 1.15 221.66 13.85 0.00 0.00 263019.90 21845.33 271853.04 00:27:37.650 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:37.650 Verification LBA range: start 0x0 length 0x400 00:27:37.650 Nvme7n1 : 1.18 216.44 13.53 0.00 0.00 265407.53 20874.43 284280.60 00:27:37.650 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:37.650 Verification LBA range: start 0x0 length 0x400 
00:27:37.650 Nvme8n1 : 1.19 268.11 16.76 0.00 0.00 210083.08 9223.59 262532.36 00:27:37.650 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:37.650 Verification LBA range: start 0x0 length 0x400 00:27:37.650 Nvme9n1 : 1.20 213.65 13.35 0.00 0.00 260170.90 22427.88 313796.08 00:27:37.650 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:37.650 Verification LBA range: start 0x0 length 0x400 00:27:37.650 Nvme10n1 : 1.22 261.40 16.34 0.00 0.00 209571.23 10534.31 267192.70 00:27:37.650 =================================================================================================================== 00:27:37.650 Total : 2294.04 143.38 0.00 0.00 254223.50 6699.24 313796.08 00:27:37.650 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:37.650 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:37.650 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:37.650 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:37.650 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:37.650 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:37.650 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:37.650 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:37.650 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:37.650 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:37.650 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:37.651 rmmod nvme_tcp 00:27:37.651 rmmod nvme_fabrics 00:27:37.651 rmmod nvme_keyring 00:27:37.651 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:37.651 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:37.651 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:37.651 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3318584 ']' 00:27:37.651 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3318584 00:27:37.651 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 3318584 ']' 00:27:37.651 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 3318584 00:27:37.651 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:27:37.651 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:37.651 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3318584 00:27:37.651 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:37.651 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:37.651 05:41:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3318584' 00:27:37.651 killing process with pid 3318584 00:27:37.651 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 3318584 00:27:37.651 05:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 3318584 00:27:38.216 05:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:38.216 05:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:38.216 05:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:38.216 05:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:38.216 05:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:38.216 05:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.216 05:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:38.216 05:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.746 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:40.746 00:27:40.746 real 0m12.134s 00:27:40.746 user 0m35.792s 00:27:40.746 sys 0m3.332s 00:27:40.746 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:40.746 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:40.746 ************************************ 00:27:40.746 END TEST nvmf_shutdown_tc1 00:27:40.746 ************************************ 00:27:40.746 05:41:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:40.746 05:41:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:40.747 ************************************ 00:27:40.747 START TEST nvmf_shutdown_tc2 00:27:40.747 ************************************ 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:40.747 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:40.747 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:40.747 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:40.747 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:40.747 05:41:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:40.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:27:40.747 00:27:40.747 --- 10.0.0.2 ping statistics --- 00:27:40.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.747 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:27:40.747 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:40.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:27:40.748 00:27:40.748 --- 10.0.0.1 ping statistics --- 00:27:40.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.748 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3319955 
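For tc2, nvmftestinit rebuilds the test network before a fresh target is started: the first ice port (cvl_0_0) is moved into its own namespace and addressed as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as 10.0.0.1, port 4420 is opened for NVMe/TCP, and reachability is checked both ways with ping. Condensed from the commands in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator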
00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3319955 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3319955 ']' 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:40.748 [2024-07-14 05:41:47.514851] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:40.748 [2024-07-14 05:41:47.514959] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.748 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.748 [2024-07-14 05:41:47.580859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:40.748 [2024-07-14 05:41:47.669327] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.748 [2024-07-14 05:41:47.669377] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.748 [2024-07-14 05:41:47.669407] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.748 [2024-07-14 05:41:47.669418] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.748 [2024-07-14 05:41:47.669429] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
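The target is launched inside that namespace with -e 0xFFFF (the tracepoint group mask noted above) and -m 0x1E. The core mask decodes to reactors on cores 1 through 4 with core 0 left free, consistent with the "Total cores available: 4" notice and the reactor start-up messages that follow:

# Decode the -m 0x1E reactor mask: 0x1E = 0b11110, so cores 1-4 are selected.
mask=0x1E
for core in 0 1 2 3 4 5 6 7; do
    if (( (mask >> core) & 1 )); then
        echo "reactor core: $core"
    fi
done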
00:27:40.748 [2024-07-14 05:41:47.669484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.748 [2024-07-14 05:41:47.669544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:40.748 [2024-07-14 05:41:47.669610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:40.748 [2024-07-14 05:41:47.669613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:40.748 [2024-07-14 05:41:47.826752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:40.748 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:41.006 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:41.006 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:41.006 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:41.006 05:41:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:41.006 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:41.006 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:41.006 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:41.006 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:41.006 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:41.006 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:41.006 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:41.006 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:41.006 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:41.006 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:41.006 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.006 05:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:41.006 Malloc1 00:27:41.006 [2024-07-14 05:41:47.916433] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:41.006 Malloc2 00:27:41.006 Malloc3 00:27:41.006 Malloc4 00:27:41.006 Malloc5 00:27:41.264 Malloc6 00:27:41.264 Malloc7 00:27:41.264 Malloc8 00:27:41.264 Malloc9 00:27:41.264 Malloc10 00:27:41.264 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.264 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:41.264 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:41.264 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3320134 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3320134 /var/tmp/bdevperf.sock 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3320134 ']' 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:41.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
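Each pass of the for i in "${num_subsystems[@]}" loop above appends a small RPC batch to rpcs.txt, and the single rpc_cmd call at shutdown.sh@35 replays the whole file against the target; that is what produces the Malloc1 through Malloc10 bdevs and the NVMe/TCP listener on 10.0.0.2:4420 reported in the log above. The batch contents are not echoed in the trace; a plausible per-subsystem block, using standard rpc.py method names with illustrative bdev sizes, would be:

# Hypothetical reconstruction of one rpcs.txt block (i = 1); the 64 MiB size
# and 512-byte block size are illustrative, not taken from the trace.
bdev_malloc_create -b Malloc1 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420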
00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.523 { 00:27:41.523 "params": { 00:27:41.523 "name": "Nvme$subsystem", 00:27:41.523 "trtype": "$TEST_TRANSPORT", 00:27:41.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.523 "adrfam": "ipv4", 00:27:41.523 "trsvcid": "$NVMF_PORT", 00:27:41.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.523 "hdgst": ${hdgst:-false}, 00:27:41.523 "ddgst": ${ddgst:-false} 00:27:41.523 }, 00:27:41.523 "method": "bdev_nvme_attach_controller" 00:27:41.523 } 00:27:41.523 EOF 00:27:41.523 )") 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.523 { 00:27:41.523 "params": { 00:27:41.523 "name": "Nvme$subsystem", 00:27:41.523 "trtype": "$TEST_TRANSPORT", 00:27:41.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.523 "adrfam": "ipv4", 00:27:41.523 "trsvcid": "$NVMF_PORT", 00:27:41.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.523 "hdgst": ${hdgst:-false}, 00:27:41.523 "ddgst": ${ddgst:-false} 00:27:41.523 }, 00:27:41.523 "method": "bdev_nvme_attach_controller" 00:27:41.523 } 00:27:41.523 EOF 00:27:41.523 )") 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.523 { 00:27:41.523 "params": { 00:27:41.523 "name": "Nvme$subsystem", 00:27:41.523 "trtype": "$TEST_TRANSPORT", 00:27:41.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.523 "adrfam": "ipv4", 00:27:41.523 "trsvcid": "$NVMF_PORT", 00:27:41.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.523 "hdgst": ${hdgst:-false}, 00:27:41.523 "ddgst": ${ddgst:-false} 00:27:41.523 }, 00:27:41.523 "method": "bdev_nvme_attach_controller" 00:27:41.523 } 00:27:41.523 EOF 00:27:41.523 )") 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.523 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.523 { 00:27:41.523 "params": { 00:27:41.523 "name": "Nvme$subsystem", 00:27:41.523 "trtype": "$TEST_TRANSPORT", 00:27:41.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.523 "adrfam": "ipv4", 00:27:41.523 "trsvcid": "$NVMF_PORT", 
00:27:41.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.523 "hdgst": ${hdgst:-false}, 00:27:41.523 "ddgst": ${ddgst:-false} 00:27:41.523 }, 00:27:41.523 "method": "bdev_nvme_attach_controller" 00:27:41.523 } 00:27:41.523 EOF 00:27:41.523 )") 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.524 { 00:27:41.524 "params": { 00:27:41.524 "name": "Nvme$subsystem", 00:27:41.524 "trtype": "$TEST_TRANSPORT", 00:27:41.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.524 "adrfam": "ipv4", 00:27:41.524 "trsvcid": "$NVMF_PORT", 00:27:41.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.524 "hdgst": ${hdgst:-false}, 00:27:41.524 "ddgst": ${ddgst:-false} 00:27:41.524 }, 00:27:41.524 "method": "bdev_nvme_attach_controller" 00:27:41.524 } 00:27:41.524 EOF 00:27:41.524 )") 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.524 { 00:27:41.524 "params": { 00:27:41.524 "name": "Nvme$subsystem", 00:27:41.524 "trtype": "$TEST_TRANSPORT", 00:27:41.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.524 "adrfam": "ipv4", 00:27:41.524 "trsvcid": "$NVMF_PORT", 00:27:41.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.524 "hdgst": ${hdgst:-false}, 00:27:41.524 "ddgst": ${ddgst:-false} 00:27:41.524 }, 00:27:41.524 "method": "bdev_nvme_attach_controller" 00:27:41.524 } 00:27:41.524 EOF 00:27:41.524 )") 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.524 { 00:27:41.524 "params": { 00:27:41.524 "name": "Nvme$subsystem", 00:27:41.524 "trtype": "$TEST_TRANSPORT", 00:27:41.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.524 "adrfam": "ipv4", 00:27:41.524 "trsvcid": "$NVMF_PORT", 00:27:41.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.524 "hdgst": ${hdgst:-false}, 00:27:41.524 "ddgst": ${ddgst:-false} 00:27:41.524 }, 00:27:41.524 "method": "bdev_nvme_attach_controller" 00:27:41.524 } 00:27:41.524 EOF 00:27:41.524 )") 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.524 { 00:27:41.524 "params": { 00:27:41.524 "name": "Nvme$subsystem", 00:27:41.524 "trtype": "$TEST_TRANSPORT", 00:27:41.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.524 "adrfam": "ipv4", 00:27:41.524 "trsvcid": "$NVMF_PORT", 00:27:41.524 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.524 "hdgst": ${hdgst:-false}, 00:27:41.524 "ddgst": ${ddgst:-false} 00:27:41.524 }, 00:27:41.524 "method": "bdev_nvme_attach_controller" 00:27:41.524 } 00:27:41.524 EOF 00:27:41.524 )") 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.524 { 00:27:41.524 "params": { 00:27:41.524 "name": "Nvme$subsystem", 00:27:41.524 "trtype": "$TEST_TRANSPORT", 00:27:41.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.524 "adrfam": "ipv4", 00:27:41.524 "trsvcid": "$NVMF_PORT", 00:27:41.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.524 "hdgst": ${hdgst:-false}, 00:27:41.524 "ddgst": ${ddgst:-false} 00:27:41.524 }, 00:27:41.524 "method": "bdev_nvme_attach_controller" 00:27:41.524 } 00:27:41.524 EOF 00:27:41.524 )") 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.524 { 00:27:41.524 "params": { 00:27:41.524 "name": "Nvme$subsystem", 00:27:41.524 "trtype": "$TEST_TRANSPORT", 00:27:41.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.524 "adrfam": "ipv4", 00:27:41.524 "trsvcid": "$NVMF_PORT", 00:27:41.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.524 "hdgst": ${hdgst:-false}, 00:27:41.524 "ddgst": ${ddgst:-false} 00:27:41.524 }, 00:27:41.524 "method": "bdev_nvme_attach_controller" 00:27:41.524 } 00:27:41.524 EOF 00:27:41.524 )") 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:41.524 05:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:41.524 "params": { 00:27:41.524 "name": "Nvme1", 00:27:41.524 "trtype": "tcp", 00:27:41.524 "traddr": "10.0.0.2", 00:27:41.524 "adrfam": "ipv4", 00:27:41.524 "trsvcid": "4420", 00:27:41.524 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:41.524 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:41.524 "hdgst": false, 00:27:41.524 "ddgst": false 00:27:41.524 }, 00:27:41.524 "method": "bdev_nvme_attach_controller" 00:27:41.524 },{ 00:27:41.524 "params": { 00:27:41.524 "name": "Nvme2", 00:27:41.524 "trtype": "tcp", 00:27:41.524 "traddr": "10.0.0.2", 00:27:41.524 "adrfam": "ipv4", 00:27:41.524 "trsvcid": "4420", 00:27:41.524 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:41.524 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:41.524 "hdgst": false, 00:27:41.524 "ddgst": false 00:27:41.524 }, 00:27:41.524 "method": "bdev_nvme_attach_controller" 00:27:41.524 },{ 00:27:41.524 "params": { 00:27:41.524 "name": "Nvme3", 00:27:41.524 "trtype": "tcp", 00:27:41.524 "traddr": "10.0.0.2", 00:27:41.524 "adrfam": "ipv4", 00:27:41.524 "trsvcid": "4420", 00:27:41.524 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:41.524 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:41.524 "hdgst": false, 00:27:41.524 "ddgst": false 00:27:41.524 }, 00:27:41.524 "method": "bdev_nvme_attach_controller" 00:27:41.524 },{ 00:27:41.524 "params": { 00:27:41.524 "name": "Nvme4", 00:27:41.524 "trtype": "tcp", 00:27:41.524 "traddr": "10.0.0.2", 00:27:41.524 "adrfam": "ipv4", 00:27:41.524 "trsvcid": "4420", 00:27:41.524 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:41.524 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:41.524 "hdgst": false, 00:27:41.524 "ddgst": false 00:27:41.524 }, 00:27:41.524 "method": "bdev_nvme_attach_controller" 00:27:41.524 },{ 00:27:41.524 "params": { 00:27:41.524 "name": "Nvme5", 00:27:41.524 "trtype": "tcp", 00:27:41.524 "traddr": "10.0.0.2", 00:27:41.524 "adrfam": "ipv4", 00:27:41.524 "trsvcid": "4420", 00:27:41.524 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:41.524 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:41.524 "hdgst": false, 00:27:41.524 "ddgst": false 00:27:41.524 }, 00:27:41.524 "method": "bdev_nvme_attach_controller" 00:27:41.524 },{ 00:27:41.524 "params": { 00:27:41.524 "name": "Nvme6", 00:27:41.524 "trtype": "tcp", 00:27:41.524 "traddr": "10.0.0.2", 00:27:41.524 "adrfam": "ipv4", 00:27:41.524 "trsvcid": "4420", 00:27:41.524 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:41.524 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:41.524 "hdgst": false, 00:27:41.524 "ddgst": false 00:27:41.524 }, 00:27:41.524 "method": "bdev_nvme_attach_controller" 00:27:41.524 },{ 00:27:41.524 "params": { 00:27:41.524 "name": "Nvme7", 00:27:41.524 "trtype": "tcp", 00:27:41.524 "traddr": "10.0.0.2", 00:27:41.524 "adrfam": "ipv4", 00:27:41.524 "trsvcid": "4420", 00:27:41.524 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:41.524 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:41.524 "hdgst": false, 00:27:41.524 "ddgst": false 00:27:41.524 }, 00:27:41.524 "method": "bdev_nvme_attach_controller" 00:27:41.524 },{ 00:27:41.524 "params": { 00:27:41.524 "name": "Nvme8", 00:27:41.524 "trtype": "tcp", 00:27:41.524 "traddr": "10.0.0.2", 00:27:41.524 "adrfam": "ipv4", 00:27:41.524 "trsvcid": "4420", 00:27:41.524 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:41.524 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:41.524 "hdgst": false, 
00:27:41.524 "ddgst": false 00:27:41.524 }, 00:27:41.524 "method": "bdev_nvme_attach_controller" 00:27:41.524 },{ 00:27:41.525 "params": { 00:27:41.525 "name": "Nvme9", 00:27:41.525 "trtype": "tcp", 00:27:41.525 "traddr": "10.0.0.2", 00:27:41.525 "adrfam": "ipv4", 00:27:41.525 "trsvcid": "4420", 00:27:41.525 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:41.525 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:41.525 "hdgst": false, 00:27:41.525 "ddgst": false 00:27:41.525 }, 00:27:41.525 "method": "bdev_nvme_attach_controller" 00:27:41.525 },{ 00:27:41.525 "params": { 00:27:41.525 "name": "Nvme10", 00:27:41.525 "trtype": "tcp", 00:27:41.525 "traddr": "10.0.0.2", 00:27:41.525 "adrfam": "ipv4", 00:27:41.525 "trsvcid": "4420", 00:27:41.525 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:41.525 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:41.525 "hdgst": false, 00:27:41.525 "ddgst": false 00:27:41.525 }, 00:27:41.525 "method": "bdev_nvme_attach_controller" 00:27:41.525 }' 00:27:41.525 [2024-07-14 05:41:48.426314] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:41.525 [2024-07-14 05:41:48.426389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320134 ] 00:27:41.525 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.525 [2024-07-14 05:41:48.490198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.525 [2024-07-14 05:41:48.576784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.454 Running I/O for 10 seconds... 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:43.454 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:43.712 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:43.712 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:43.712 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:43.712 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:43.712 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.712 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.712 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.712 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:43.712 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:43.712 05:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3320134 00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3320134 ']' 00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3320134 00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 
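Two polls in (3 and then 67 completed reads, both under the 100-read threshold), the loop keeps sleeping. The helper being traced here (target/shutdown.sh@50-69) is essentially the following; this is a hedged reconstruction from the trace, using the same rpc_cmd wrapper the harness uses, not a verbatim copy of the script.

# Reconstruction of the waitforio polling loop: query bdevperf's iostat for the
# named bdev up to 10 times, 0.25 s apart, until it reports >= 100 reads.
waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# Invoked in this run as: waitforio /var/tmp/bdevperf.sock Nvme1n1
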
00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:43.987 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3320134 00:27:44.249 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:44.249 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:44.249 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3320134' 00:27:44.249 killing process with pid 3320134 00:27:44.249 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3320134 00:27:44.249 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3320134 00:27:44.249 Received shutdown signal, test time was about 0.963691 seconds 00:27:44.249 00:27:44.249 Latency(us) 00:27:44.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.249 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.249 Verification LBA range: start 0x0 length 0x400 00:27:44.249 Nvme1n1 : 0.96 267.72 16.73 0.00 0.00 236051.53 20194.80 250104.79 00:27:44.249 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.249 Verification LBA range: start 0x0 length 0x400 00:27:44.249 Nvme2n1 : 0.93 205.97 12.87 0.00 0.00 300935.90 23884.23 256318.58 00:27:44.249 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.249 Verification LBA range: start 0x0 length 0x400 00:27:44.249 Nvme3n1 : 0.95 268.69 16.79 0.00 0.00 226249.77 20000.62 254765.13 00:27:44.249 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.249 Verification LBA range: start 0x0 length 0x400 00:27:44.249 Nvme4n1 : 0.96 265.87 16.62 0.00 0.00 224293.74 21165.70 236123.78 00:27:44.249 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.249 Verification LBA range: start 0x0 length 0x400 00:27:44.249 Nvme5n1 : 0.94 204.23 12.76 0.00 0.00 285240.89 21165.70 279620.27 00:27:44.249 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.249 Verification LBA range: start 0x0 length 0x400 00:27:44.249 Nvme6n1 : 0.95 269.46 16.84 0.00 0.00 211770.41 19418.07 229910.00 00:27:44.249 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.249 Verification LBA range: start 0x0 length 0x400 00:27:44.249 Nvme7n1 : 0.91 211.54 13.22 0.00 0.00 262403.41 29515.47 290494.39 00:27:44.249 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.249 Verification LBA range: start 0x0 length 0x400 00:27:44.249 Nvme8n1 : 0.92 208.71 13.04 0.00 0.00 260195.87 22524.97 254765.13 00:27:44.249 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.249 Verification LBA range: start 0x0 length 0x400 00:27:44.249 Nvme9n1 : 0.93 207.42 12.96 0.00 0.00 256298.35 18835.53 237677.23 00:27:44.249 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.249 Verification LBA range: start 0x0 length 0x400 00:27:44.249 Nvme10n1 : 0.94 203.61 12.73 0.00 0.00 256138.81 19418.07 290494.39 00:27:44.249 =================================================================================================================== 00:27:44.249 Total : 2313.22 
144.58 0.00 0.00 248738.28 18835.53 290494.39 00:27:44.507 05:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3319955 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:45.440 rmmod nvme_tcp 00:27:45.440 rmmod nvme_fabrics 00:27:45.440 rmmod nvme_keyring 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3319955 ']' 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3319955 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3319955 ']' 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3319955 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3319955 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3319955' 00:27:45.440 killing process with pid 3319955 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3319955 00:27:45.440 05:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3319955 00:27:46.007 05:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 
-- # '[' '' == iso ']' 00:27:46.007 05:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:46.007 05:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:46.007 05:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:46.007 05:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:46.007 05:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.007 05:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:46.007 05:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:48.537 00:27:48.537 real 0m7.772s 00:27:48.537 user 0m23.696s 00:27:48.537 sys 0m1.533s 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:48.537 ************************************ 00:27:48.537 END TEST nvmf_shutdown_tc2 00:27:48.537 ************************************ 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:48.537 ************************************ 00:27:48.537 START TEST nvmf_shutdown_tc3 00:27:48.537 ************************************ 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:48.537 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 
00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:48.538 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:48.538 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:48.538 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.538 
05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:48.538 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:48.538 05:41:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:48.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:48.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:27:48.538 00:27:48.538 --- 10.0.0.2 ping statistics --- 00:27:48.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.538 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:48.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:48.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:27:48.538 00:27:48.538 --- 10.0.0.1 ping statistics --- 00:27:48.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.538 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3321050 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3321050 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3321050 ']' 00:27:48.538 05:41:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:48.538 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:48.538 [2024-07-14 05:41:55.344306] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:48.538 [2024-07-14 05:41:55.344375] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:48.538 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.539 [2024-07-14 05:41:55.407763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:48.539 [2024-07-14 05:41:55.496171] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:48.539 [2024-07-14 05:41:55.496220] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:48.539 [2024-07-14 05:41:55.496242] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:48.539 [2024-07-14 05:41:55.496254] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:48.539 [2024-07-14 05:41:55.496263] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
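For reference, the loopback topology that nvmftestinit traced a few entries above (nvmf/common.sh@229-268) builds for this target is the following command sequence, copied from the trace: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace (where nvmf_tgt runs) and addressed as 10.0.0.2, while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1.

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # sanity checks, as in the log
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
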
00:27:48.539 [2024-07-14 05:41:55.496349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:48.539 [2024-07-14 05:41:55.496414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:48.539 [2024-07-14 05:41:55.496482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.539 [2024-07-14 05:41:55.496480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:48.539 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:48.539 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:48.539 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:48.539 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:48.539 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:48.798 [2024-07-14 05:41:55.653726] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:48.798 05:41:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.798 05:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:48.798 Malloc1 00:27:48.798 [2024-07-14 05:41:55.743420] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.798 Malloc2 00:27:48.798 Malloc3 00:27:48.798 Malloc4 00:27:49.057 Malloc5 00:27:49.057 Malloc6 00:27:49.057 Malloc7 00:27:49.057 Malloc8 00:27:49.057 Malloc9 00:27:49.315 Malloc10 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3321221 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3321221 /var/tmp/bdevperf.sock 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3321221 ']' 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:27:49.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:49.315 { 00:27:49.315 "params": { 00:27:49.315 "name": "Nvme$subsystem", 00:27:49.315 "trtype": "$TEST_TRANSPORT", 00:27:49.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.315 "adrfam": "ipv4", 00:27:49.315 "trsvcid": "$NVMF_PORT", 00:27:49.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.315 "hdgst": ${hdgst:-false}, 00:27:49.315 "ddgst": ${ddgst:-false} 00:27:49.315 }, 00:27:49.315 "method": "bdev_nvme_attach_controller" 00:27:49.315 } 00:27:49.315 EOF 00:27:49.315 )") 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:49.315 { 00:27:49.315 "params": { 00:27:49.315 "name": "Nvme$subsystem", 00:27:49.315 "trtype": "$TEST_TRANSPORT", 00:27:49.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.315 "adrfam": "ipv4", 00:27:49.315 "trsvcid": "$NVMF_PORT", 00:27:49.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.315 "hdgst": ${hdgst:-false}, 00:27:49.315 "ddgst": ${ddgst:-false} 00:27:49.315 }, 00:27:49.315 "method": "bdev_nvme_attach_controller" 00:27:49.315 } 00:27:49.315 EOF 00:27:49.315 )") 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:49.315 { 00:27:49.315 "params": { 00:27:49.315 "name": "Nvme$subsystem", 00:27:49.315 "trtype": "$TEST_TRANSPORT", 00:27:49.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.315 "adrfam": "ipv4", 00:27:49.315 "trsvcid": "$NVMF_PORT", 00:27:49.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.315 "hdgst": ${hdgst:-false}, 00:27:49.315 "ddgst": ${ddgst:-false} 00:27:49.315 }, 00:27:49.315 "method": "bdev_nvme_attach_controller" 00:27:49.315 } 00:27:49.315 EOF 00:27:49.315 )") 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:49.315 { 00:27:49.315 "params": { 00:27:49.315 "name": "Nvme$subsystem", 00:27:49.315 "trtype": "$TEST_TRANSPORT", 00:27:49.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.315 "adrfam": "ipv4", 00:27:49.315 "trsvcid": "$NVMF_PORT", 
00:27:49.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.315 "hdgst": ${hdgst:-false}, 00:27:49.315 "ddgst": ${ddgst:-false} 00:27:49.315 }, 00:27:49.315 "method": "bdev_nvme_attach_controller" 00:27:49.315 } 00:27:49.315 EOF 00:27:49.315 )") 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:49.315 { 00:27:49.315 "params": { 00:27:49.315 "name": "Nvme$subsystem", 00:27:49.315 "trtype": "$TEST_TRANSPORT", 00:27:49.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.315 "adrfam": "ipv4", 00:27:49.315 "trsvcid": "$NVMF_PORT", 00:27:49.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.315 "hdgst": ${hdgst:-false}, 00:27:49.315 "ddgst": ${ddgst:-false} 00:27:49.315 }, 00:27:49.315 "method": "bdev_nvme_attach_controller" 00:27:49.315 } 00:27:49.315 EOF 00:27:49.315 )") 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:49.315 { 00:27:49.315 "params": { 00:27:49.315 "name": "Nvme$subsystem", 00:27:49.315 "trtype": "$TEST_TRANSPORT", 00:27:49.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.315 "adrfam": "ipv4", 00:27:49.315 "trsvcid": "$NVMF_PORT", 00:27:49.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.315 "hdgst": ${hdgst:-false}, 00:27:49.315 "ddgst": ${ddgst:-false} 00:27:49.315 }, 00:27:49.315 "method": "bdev_nvme_attach_controller" 00:27:49.315 } 00:27:49.315 EOF 00:27:49.315 )") 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:49.315 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:49.315 { 00:27:49.315 "params": { 00:27:49.315 "name": "Nvme$subsystem", 00:27:49.315 "trtype": "$TEST_TRANSPORT", 00:27:49.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.315 "adrfam": "ipv4", 00:27:49.315 "trsvcid": "$NVMF_PORT", 00:27:49.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.316 "hdgst": ${hdgst:-false}, 00:27:49.316 "ddgst": ${ddgst:-false} 00:27:49.316 }, 00:27:49.316 "method": "bdev_nvme_attach_controller" 00:27:49.316 } 00:27:49.316 EOF 00:27:49.316 )") 00:27:49.316 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:49.316 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:49.316 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:49.316 { 00:27:49.316 "params": { 00:27:49.316 "name": "Nvme$subsystem", 00:27:49.316 "trtype": "$TEST_TRANSPORT", 00:27:49.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.316 "adrfam": "ipv4", 00:27:49.316 "trsvcid": "$NVMF_PORT", 00:27:49.316 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.316 "hdgst": ${hdgst:-false}, 00:27:49.316 "ddgst": ${ddgst:-false} 00:27:49.316 }, 00:27:49.316 "method": "bdev_nvme_attach_controller" 00:27:49.316 } 00:27:49.316 EOF 00:27:49.316 )") 00:27:49.316 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:49.316 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:49.316 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:49.316 { 00:27:49.316 "params": { 00:27:49.316 "name": "Nvme$subsystem", 00:27:49.316 "trtype": "$TEST_TRANSPORT", 00:27:49.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.316 "adrfam": "ipv4", 00:27:49.316 "trsvcid": "$NVMF_PORT", 00:27:49.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.316 "hdgst": ${hdgst:-false}, 00:27:49.316 "ddgst": ${ddgst:-false} 00:27:49.316 }, 00:27:49.316 "method": "bdev_nvme_attach_controller" 00:27:49.316 } 00:27:49.316 EOF 00:27:49.316 )") 00:27:49.316 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:49.316 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:49.316 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:49.316 { 00:27:49.316 "params": { 00:27:49.316 "name": "Nvme$subsystem", 00:27:49.316 "trtype": "$TEST_TRANSPORT", 00:27:49.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.316 "adrfam": "ipv4", 00:27:49.316 "trsvcid": "$NVMF_PORT", 00:27:49.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.316 "hdgst": ${hdgst:-false}, 00:27:49.316 "ddgst": ${ddgst:-false} 00:27:49.316 }, 00:27:49.316 "method": "bdev_nvme_attach_controller" 00:27:49.316 } 00:27:49.316 EOF 00:27:49.316 )") 00:27:49.316 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:49.316 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:27:49.316 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:49.316 05:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:49.316 "params": { 00:27:49.316 "name": "Nvme1", 00:27:49.316 "trtype": "tcp", 00:27:49.316 "traddr": "10.0.0.2", 00:27:49.316 "adrfam": "ipv4", 00:27:49.316 "trsvcid": "4420", 00:27:49.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:49.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:49.316 "hdgst": false, 00:27:49.316 "ddgst": false 00:27:49.316 }, 00:27:49.316 "method": "bdev_nvme_attach_controller" 00:27:49.316 },{ 00:27:49.316 "params": { 00:27:49.316 "name": "Nvme2", 00:27:49.316 "trtype": "tcp", 00:27:49.316 "traddr": "10.0.0.2", 00:27:49.316 "adrfam": "ipv4", 00:27:49.316 "trsvcid": "4420", 00:27:49.316 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:49.316 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:49.316 "hdgst": false, 00:27:49.316 "ddgst": false 00:27:49.316 }, 00:27:49.316 "method": "bdev_nvme_attach_controller" 00:27:49.316 },{ 00:27:49.316 "params": { 00:27:49.316 "name": "Nvme3", 00:27:49.316 "trtype": "tcp", 00:27:49.316 "traddr": "10.0.0.2", 00:27:49.316 "adrfam": "ipv4", 00:27:49.316 "trsvcid": "4420", 00:27:49.316 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:49.316 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:49.316 "hdgst": false, 00:27:49.316 "ddgst": false 00:27:49.316 }, 00:27:49.316 "method": "bdev_nvme_attach_controller" 00:27:49.316 },{ 00:27:49.316 "params": { 00:27:49.316 "name": "Nvme4", 00:27:49.316 "trtype": "tcp", 00:27:49.316 "traddr": "10.0.0.2", 00:27:49.316 "adrfam": "ipv4", 00:27:49.316 "trsvcid": "4420", 00:27:49.316 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:49.316 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:49.316 "hdgst": false, 00:27:49.316 "ddgst": false 00:27:49.316 }, 00:27:49.316 "method": "bdev_nvme_attach_controller" 00:27:49.316 },{ 00:27:49.316 "params": { 00:27:49.316 "name": "Nvme5", 00:27:49.316 "trtype": "tcp", 00:27:49.316 "traddr": "10.0.0.2", 00:27:49.316 "adrfam": "ipv4", 00:27:49.316 "trsvcid": "4420", 00:27:49.316 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:49.316 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:49.316 "hdgst": false, 00:27:49.316 "ddgst": false 00:27:49.316 }, 00:27:49.316 "method": "bdev_nvme_attach_controller" 00:27:49.316 },{ 00:27:49.316 "params": { 00:27:49.316 "name": "Nvme6", 00:27:49.316 "trtype": "tcp", 00:27:49.316 "traddr": "10.0.0.2", 00:27:49.316 "adrfam": "ipv4", 00:27:49.316 "trsvcid": "4420", 00:27:49.316 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:49.316 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:49.316 "hdgst": false, 00:27:49.316 "ddgst": false 00:27:49.316 }, 00:27:49.316 "method": "bdev_nvme_attach_controller" 00:27:49.316 },{ 00:27:49.316 "params": { 00:27:49.316 "name": "Nvme7", 00:27:49.316 "trtype": "tcp", 00:27:49.316 "traddr": "10.0.0.2", 00:27:49.316 "adrfam": "ipv4", 00:27:49.316 "trsvcid": "4420", 00:27:49.316 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:49.316 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:49.316 "hdgst": false, 00:27:49.316 "ddgst": false 00:27:49.316 }, 00:27:49.316 "method": "bdev_nvme_attach_controller" 00:27:49.316 },{ 00:27:49.316 "params": { 00:27:49.316 "name": "Nvme8", 00:27:49.316 "trtype": "tcp", 00:27:49.316 "traddr": "10.0.0.2", 00:27:49.316 "adrfam": "ipv4", 00:27:49.316 "trsvcid": "4420", 00:27:49.316 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:49.316 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:49.316 "hdgst": false, 
00:27:49.316 "ddgst": false 00:27:49.316 }, 00:27:49.316 "method": "bdev_nvme_attach_controller" 00:27:49.316 },{ 00:27:49.316 "params": { 00:27:49.316 "name": "Nvme9", 00:27:49.316 "trtype": "tcp", 00:27:49.316 "traddr": "10.0.0.2", 00:27:49.316 "adrfam": "ipv4", 00:27:49.316 "trsvcid": "4420", 00:27:49.316 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:49.316 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:49.316 "hdgst": false, 00:27:49.316 "ddgst": false 00:27:49.316 }, 00:27:49.316 "method": "bdev_nvme_attach_controller" 00:27:49.316 },{ 00:27:49.316 "params": { 00:27:49.316 "name": "Nvme10", 00:27:49.316 "trtype": "tcp", 00:27:49.316 "traddr": "10.0.0.2", 00:27:49.316 "adrfam": "ipv4", 00:27:49.316 "trsvcid": "4420", 00:27:49.316 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:49.316 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:49.316 "hdgst": false, 00:27:49.316 "ddgst": false 00:27:49.316 }, 00:27:49.316 "method": "bdev_nvme_attach_controller" 00:27:49.316 }' 00:27:49.316 [2024-07-14 05:41:56.258610] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:49.316 [2024-07-14 05:41:56.258697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3321221 ] 00:27:49.316 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.316 [2024-07-14 05:41:56.323944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.316 [2024-07-14 05:41:56.410717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.216 Running I/O for 10 seconds... 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:51.216 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:51.475 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:51.475 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:51.475 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:51.475 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:51.475 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.475 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.475 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.475 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:51.475 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:51.475 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:51.733 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:51.733 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:51.733 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:51.733 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:51.733 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.733 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.006 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.006 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:52.006 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:52.006 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:52.006 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:52.006 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:52.006 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3321050 00:27:52.006 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 3321050 ']' 00:27:52.006 05:41:58 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 3321050 00:27:52.006 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:27:52.006 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:52.006 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3321050 00:27:52.006 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:52.006 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:52.006 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3321050' 00:27:52.006 killing process with pid 3321050 00:27:52.006 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 3321050 00:27:52.006 05:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 3321050 00:27:52.006 [2024-07-14 05:41:58.885472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0560 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.885560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0560 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.885576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0560 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.885588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0560 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.885600] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0560 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.885612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0560 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886707] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886758] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886807] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the 
state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886854] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886873] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886902] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886922] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886947] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886971] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.886995] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.887007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.887020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.887032] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.887044] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.887056] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.887068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.887081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.887093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.887104] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.887117] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.887135] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.006 [2024-07-14 05:41:58.887147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887163] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887204] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887217] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887229] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887241] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887254] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887279] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887291] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887314] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887339] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887385] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887397] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887409] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887421] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887433] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887455] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887467] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887505] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887517] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887530] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887542] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.887554] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27700 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.888989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889027] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889065] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889077] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889102] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889114] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889138] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 
00:27:52.007 [2024-07-14 05:41:58.889150] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889163] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889183] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889196] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889209] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889221] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889233] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889245] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889282] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889310] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889360] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889372] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889400] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889412] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889436] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is 
same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889448] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889460] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889496] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889507] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889520] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889556] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889568] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889579] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889597] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889622] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889634] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889645] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889674] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889685] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889698] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889722] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889733] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889745] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.007 [2024-07-14 05:41:58.889758] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.889770] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.889781] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.889793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.889804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0a00 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.892873] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.892920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.892935] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.892948] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.892962] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.892974] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.892987] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893025] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893051] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893076] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893094] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893108] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893129] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893141] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893154] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893179] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893205] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893217] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893243] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893256] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893281] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893306] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893319] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893333] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893345] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 00:27:52.008 [2024-07-14 05:41:58.893358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set 
00:27:52.008 [2024-07-14 05:41:58.893371] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893383] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893397] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893409] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893464] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893490] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.008 [2024-07-14 05:41:58.893503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893519] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.008 [2024-07-14 05:41:58.893532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.008 [2024-07-14 05:41:58.893569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.008 [2024-07-14 05:41:58.893583] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.008 [2024-07-14 05:41:58.893596] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.008 [2024-07-14 05:41:58.893609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.008 [2024-07-14 05:41:58.893622] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893638] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.008 [2024-07-14 05:41:58.893652] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.008 [2024-07-14 05:41:58.893666] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.008 [2024-07-14 05:41:58.893679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.008 [2024-07-14 05:41:58.893706] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.008 [2024-07-14 05:41:58.893721] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.008 [2024-07-14 05:41:58.893735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1360 is same with the state(5) to be set
00:27:52.008 [2024-07-14 05:41:58.893739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.008 [2024-07-14 05:41:58.893755]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.008 [2024-07-14 05:41:58.893768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.008 [2024-07-14 05:41:58.893783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.008 [2024-07-14 05:41:58.893796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.008 [2024-07-14 05:41:58.893811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.008 [2024-07-14 05:41:58.893824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.008 [2024-07-14 05:41:58.893857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.008 [2024-07-14 05:41:58.893877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.008 [2024-07-14 05:41:58.893894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.893924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.893940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.893953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.893969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.893982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.893997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 [2024-07-14 05:41:58.894676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.009 [2024-07-14 05:41:58.894689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.009 (the same WRITE/ABORTED - SQ DELETION pair repeats for cid:39 lba:29568 through cid:62 lba:32512 between 05:41:58.894707 and 05:41:58.895470, interleaved with the repeated message [tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1820 is same with the state(5) to be set] printed between 05:41:58.894680 and 05:41:58.895483) 00:27:52.010 [2024-07-14 05:41:58.895486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1
lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.011 [2024-07-14 05:41:58.895496] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1820 is same with the state(5) to be set 00:27:52.011 [2024-07-14 05:41:58.895500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.011 [2024-07-14 05:41:58.895509] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1820 is same with the state(5) to be set 00:27:52.011 [2024-07-14 05:41:58.895514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.011 [2024-07-14 05:41:58.895521] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1820 is same with the state(5) to be set 00:27:52.011 [2024-07-14 05:41:58.895528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.011 [2024-07-14 05:41:58.895534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1820 is same with the state(5) to be set 00:27:52.011 [2024-07-14 05:41:58.895547] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1820 is same with the state(5) to be set 00:27:52.011 [2024-07-14 05:41:58.895559] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1820 is same with the state(5) to be set 00:27:52.011 [2024-07-14 05:41:58.895571] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1820 is same with the state(5) to be set 00:27:52.011 [2024-07-14 05:41:58.895576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.011 [2024-07-14 05:41:58.895583] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1820 is same with the state(5) to be set 00:27:52.011 [2024-07-14 05:41:58.895596] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1820 is same with the state(5) to be set 00:27:52.011 [2024-07-14 05:41:58.895659] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x29dbcd0 was disconnected and freed. reset controller. 
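(Illustrative aside, not part of the test output: the sequence above - I/O completions returned as ABORTED - SQ DELETION, spdk_nvme_qpair_process_completions() reporting "CQ transport error -6", and bdev_nvme freeing the disconnected qpair before resetting the controller - is the usual SPDK host-side recovery path. A minimal sketch of that pattern against the public SPDK NVMe API follows; the recover_qpair() helper and its variable names are hypothetical and not taken from the test code.)

#include "spdk/nvme.h"

/* Hypothetical helper: poll an I/O qpair and, when polling fails with a
 * negative errno (e.g. -ENXIO, the "-6" seen in the log above), drop the
 * qpair and reset the controller, roughly mirroring what bdev_nvme does. */
static int
recover_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair **qpair)
{
	/* max_completions == 0 means "process everything that is ready". */
	int32_t rc = spdk_nvme_qpair_process_completions(*qpair, 0);

	if (rc >= 0) {
		return 0;	/* completions (possibly aborted ones) drained normally */
	}

	/* The qpair is unusable: free it, reset the controller, reconnect I/O. */
	spdk_nvme_ctrlr_free_io_qpair(*qpair);
	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
		return -1;	/* controller did not come back */
	}
	*qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	return *qpair != NULL ? 0 : -1;
}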
00:27:52.011 [2024-07-14 05:41:58.896255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.011 [2024-07-14 05:41:58.896281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.011 [2024-07-14 05:41:58.896297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.011 [2024-07-14 05:41:58.896311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.011 [2024-07-14 05:41:58.896325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.011 [2024-07-14 05:41:58.896339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.011 [2024-07-14 05:41:58.896353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.011 [2024-07-14 05:41:58.896366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.011 [2024-07-14 05:41:58.896379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28ddf90 is same with the state(5) to be set 00:27:52.011 [2024-07-14 05:41:58.896432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.011 [2024-07-14 05:41:58.896453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.011 [2024-07-14 05:41:58.896468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.011 [2024-07-14 05:41:58.896481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.011 [2024-07-14 05:41:58.896495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.011 [2024-07-14 05:41:58.896508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.011 [2024-07-14 05:41:58.896522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.011 [2024-07-14 05:41:58.896535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.011 [2024-07-14 05:41:58.896549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28a86b0 is same with the state(5) to be set 00:27:52.011 [2024-07-14 05:41:58.896619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.011 [2024-07-14 05:41:58.896641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.011 [2024-07-14 05:41:58.896656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.011 [2024-07-14 05:41:58.896669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.011 [2024-07-14 05:41:58.896684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.011 [2024-07-14 05:41:58.896697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.011 [2024-07-14 05:41:58.896711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.011 [2024-07-14 05:41:58.896724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.011 [2024-07-14 05:41:58.896737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28b0810 is same with the state(5) to be set 00:27:52.011 (three more ASYNC EVENT REQUEST/ABORTED - SQ DELETION groups for qid:0 cid:0-3 follow between 05:41:58.896778 and 05:41:58.897293, each ending with the same nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state recv-state notice for tqpair=0x2885300, tqpair=0x2883190 and tqpair=0x28b3f90 respectively; they are interleaved with the repeated message [tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1cc0 is same with the state(5) to be set] from 05:41:58.896796 onward, which then continues on its own from 05:41:58.897307 to 05:41:58.897543) 00:27:52.012 [2024-07-14 05:41:58.897559]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1cc0 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.897573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1cc0 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.897586] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1cc0 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.897601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1cc0 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.897614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1cc0 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.897632] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1cc0 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.897645] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1cc0 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.897658] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1cc0 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.897670] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1cc0 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.897682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1cc0 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.897694] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1cc0 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.897710] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1cc0 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899143] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899160] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899198] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899213] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899249] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899262] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 
00:27:52.012 [2024-07-14 05:41:58.899275] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899287] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899300] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899325] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899337] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899349] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899362] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899399] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899411] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899423] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899437] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899449] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899506] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is 
same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899556] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899582] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899598] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899642] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899742] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899755] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.012 [2024-07-14 05:41:58.899768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.899780] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.899793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.899805] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.899818] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.899831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.899844] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.899856] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.899875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.899889] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2160 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.900610] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:52.013 [2024-07-14 05:41:58.900651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28b0810 (9): Bad file descriptor 00:27:52.013 [2024-07-14 05:41:58.900710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.013 [2024-07-14 05:41:58.900732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.013 [2024-07-14 05:41:58.900756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.013 [2024-07-14 05:41:58.900773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.013 [2024-07-14 05:41:58.900789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.013 [2024-07-14 05:41:58.900803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.013 [2024-07-14 05:41:58.900820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.013 [2024-07-14 05:41:58.900836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.013 [2024-07-14 05:41:58.900852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.013 [2024-07-14 05:41:58.900874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.013 [2024-07-14 05:41:58.900892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.013 [2024-07-14 05:41:58.900917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.013 [2024-07-14 05:41:58.900932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.013 [2024-07-14 05:41:58.900946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.013 [2024-07-14 05:41:58.900962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.013 [2024-07-14 05:41:58.900976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.013 [2024-07-14 05:41:58.900992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.013 [2024-07-14 05:41:58.901006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.013 [2024-07-14 05:41:58.901021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.013 [2024-07-14 05:41:58.901035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.013 [2024-07-14 05:41:58.901051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.013 (the abort completion for cid:10 and the same WRITE/ABORTED - SQ DELETION pairs for cid:11 lba:25984 through cid:18 lba:26880 follow between 05:41:58.901065 and 05:41:58.901338, interleaved with the repeated message [tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set]) 00:27:52.013 [2024-07-14 05:41:58.901347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to
be set 00:27:52.013 [2024-07-14 05:41:58.901353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.013 [2024-07-14 05:41:58.901360] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.901368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.013 [2024-07-14 05:41:58.901374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.901383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.013 [2024-07-14 05:41:58.901387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.901399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:1[2024-07-14 05:41:58.901400] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.013 he state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.901415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-14 05:41:58.901416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.013 he state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.901431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.901434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.013 [2024-07-14 05:41:58.901444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.901448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.013 [2024-07-14 05:41:58.901457] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.901464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.013 [2024-07-14 05:41:58.901470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.901479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.013 [2024-07-14 05:41:58.901483] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.013 [2024-07-14 05:41:58.901495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:1[2024-07-14 05:41:58.901496] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.013 he 
state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with t[2024-07-14 05:41:58.901512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:27:52.014 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.014 [2024-07-14 05:41:58.901525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.014 [2024-07-14 05:41:58.901541] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.014 [2024-07-14 05:41:58.901555] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.014 [2024-07-14 05:41:58.901568] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.014 [2024-07-14 05:41:58.901581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.014 [2024-07-14 05:41:58.901594] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-14 05:41:58.901606] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.014 he state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901620] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.014 [2024-07-14 05:41:58.901632] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.014 [2024-07-14 05:41:58.901644] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.014 [2024-07-14 05:41:58.901657] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.014 [2024-07-14 05:41:58.901669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with t[2024-07-14 05:41:58.901682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:1he state(5) to be set 00:27:52.014 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.014 [2024-07-14 05:41:58.901695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.014 [2024-07-14 05:41:58.901708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.014 [2024-07-14 05:41:58.901724] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.014 [2024-07-14 05:41:58.901737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.014 [2024-07-14 05:41:58.901750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.014 [2024-07-14 05:41:58.901763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:1[2024-07-14 05:41:58.901776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.014 he state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901790] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with t[2024-07-14 05:41:58.901791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:27:52.014 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.014 [2024-07-14 05:41:58.901804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with 
the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.014 [2024-07-14 05:41:58.901817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.014 [2024-07-14 05:41:58.901830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.014 [2024-07-14 05:41:58.901842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-14 05:41:58.901855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.014 he state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901891] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.014 [2024-07-14 05:41:58.901915] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.014 [2024-07-14 05:41:58.901929] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.014 [2024-07-14 05:41:58.901945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.014 [2024-07-14 05:41:58.901958] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.014 [2024-07-14 05:41:58.901971] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2620 is same with the state(5) to be set 00:27:52.014 [2024-07-14 05:41:58.901987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.014 [2024-07-14 05:41:58.902004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.014 [2024-07-14 05:41:58.902018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.014 [2024-07-14 05:41:58.902032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.014 [2024-07-14 05:41:58.902046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.015 [2024-07-14 05:41:58.902061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.015 [2024-07-14 05:41:58.902075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.015 [2024-07-14 05:41:58.902089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.016 [2024-07-14 05:41:58.902770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.902853] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x295cef0 was disconnected and freed. reset controller. 00:27:52.016 [2024-07-14 05:41:58.903353] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x28813e0 was disconnected and freed. reset controller. 
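Note: the block above is the initiator draining its outstanding WRITEs when the target tears down the submission queue during the controller reset. Each command completes with status code type 0x00 (generic) and status code 0x08, which SPDK prints as "ABORTED - SQ DELETION (00/08)", and the affected qpairs are then disconnected and freed before the reset proceeds. As a rough illustration only (not part of this test run), a minimal SPDK completion callback that recognizes this status could look like the sketch below; the callback name and the io_ctx retry counter are hypothetical, while spdk_nvme_cpl_is_error() and the status constants are the standard SPDK definitions.

    #include "spdk/nvme.h"

    /* Hypothetical per-I/O bookkeeping used only for this sketch. */
    struct io_ctx {
            int retries;
    };

    /* Minimal spdk_nvme_cmd_cb-style completion callback (illustrative).
     * It checks for the status the log prints as "ABORTED - SQ DELETION
     * (00/08)": SPDK_NVME_SCT_GENERIC (0x00) / SPDK_NVME_SC_ABORTED_SQ_DELETION
     * (0x08), i.e. I/O aborted only because its queue pair was deleted
     * during the controller reset. */
    static void
    write_complete_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            struct io_ctx *ctx = arg;

            if (!spdk_nvme_cpl_is_error(cpl)) {
                    return;         /* completed successfully */
            }

            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    ctx->retries++; /* retryable: resubmit after the reset */
                    return;
            }

            /* any other error status would be reported as a real I/O failure */
    }
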
00:27:52.016 [2024-07-14 05:41:58.905468] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:52.016 [2024-07-14 05:41:58.905498] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:52.016 [2024-07-14 05:41:58.905551] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a28f50 (9): Bad file descriptor 00:27:52.016 [2024-07-14 05:41:58.905578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2885300 (9): Bad file descriptor 00:27:52.016 [2024-07-14 05:41:58.905819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.016 [2024-07-14 05:41:58.905847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28b0810 with addr=10.0.0.2, port=4420 00:27:52.016 [2024-07-14 05:41:58.905864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28b0810 is same with the state(5) to be set 00:27:52.016 [2024-07-14 05:41:58.905978] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:52.016 [2024-07-14 05:41:58.906063] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:52.016 [2024-07-14 05:41:58.906363] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28b0810 (9): Bad file descriptor 00:27:52.016 [2024-07-14 05:41:58.906393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28ddf90 (9): Bad file descriptor 00:27:52.016 [2024-07-14 05:41:58.906426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28a86b0 (9): Bad file descriptor 00:27:52.016 [2024-07-14 05:41:58.906476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.016 [2024-07-14 05:41:58.906498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.906514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.016 [2024-07-14 05:41:58.906528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.906543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.016 [2024-07-14 05:41:58.906562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.906576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.016 [2024-07-14 05:41:58.906590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.906603] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237d610 is same with the state(5) to be set 00:27:52.016 [2024-07-14 05:41:58.906649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.016 [2024-07-14 05:41:58.906670] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.906686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.016 [2024-07-14 05:41:58.906699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.906713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.016 [2024-07-14 05:41:58.906727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.906741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.016 [2024-07-14 05:41:58.906755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.906768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a29f00 is same with the state(5) to be set 00:27:52.016 [2024-07-14 05:41:58.906801] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2883190 (9): Bad file descriptor 00:27:52.016 [2024-07-14 05:41:58.906832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28b3f90 (9): Bad file descriptor 00:27:52.016 [2024-07-14 05:41:58.906888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.016 [2024-07-14 05:41:58.906913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.906928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.016 [2024-07-14 05:41:58.906941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.906955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.016 [2024-07-14 05:41:58.906969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.016 [2024-07-14 05:41:58.906983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.016 [2024-07-14 05:41:58.906996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.907009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a47ec0 is same with the state(5) to be set 00:27:52.017 [2024-07-14 05:41:58.907398] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:52.017 [2024-07-14 05:41:58.907475] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:52.017 [2024-07-14 05:41:58.907666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.907690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.907711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.907727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.907743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.907758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.907774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.907789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.907804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.907818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.907835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.907848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.907873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.907890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.907917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.907931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.907947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.907961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.907977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.907990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.908006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.908020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.908035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.908049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.908065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.908083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.908100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.908115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.908131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.908145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.908166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.908180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.908196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.908210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.908226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.908239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.908255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.908269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.908285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.908299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.908314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.931739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.931825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.931843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.931860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.931885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.931904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.931919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.931935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.931950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.931965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.931991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.932008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.932022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.932039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.932052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.932068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.932083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.932099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.932113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.932128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:52.017 [2024-07-14 05:41:58.932143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.932159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.932173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.932189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.932203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.932219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.932232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.932248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.932262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.932278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.932291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.932309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.932324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.932339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.932353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.932372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.932387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.932402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 05:41:58.932416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.932432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.017 [2024-07-14 
05:41:58.932446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.017 [2024-07-14 05:41:58.932463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.932477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.932492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.932507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.932522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.932536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.932552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.932567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.932583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.932597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.932613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.932628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.932643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.932657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.932672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.932686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.932702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.932716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.932733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.932750] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.932768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.932783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.932799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.932813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.932828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.932842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.932858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.932879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.932896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.932911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.932927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.932941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.932956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.932971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.932986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.933000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.933016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.933030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.933046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.933060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.933076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.933090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.933106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.933120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.933140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.018 [2024-07-14 05:41:58.933155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.018 [2024-07-14 05:41:58.933170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29558b0 is same with the state(5) to be set 00:27:52.018 [2024-07-14 05:41:58.933714] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x29558b0 was disconnected and freed. reset controller. 00:27:52.018 [2024-07-14 05:41:58.934118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.018 [2024-07-14 05:41:58.934150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2885300 with addr=10.0.0.2, port=4420 00:27:52.018 [2024-07-14 05:41:58.934171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2885300 is same with the state(5) to be set 00:27:52.018 [2024-07-14 05:41:58.934335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.018 [2024-07-14 05:41:58.934360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a28f50 with addr=10.0.0.2, port=4420 00:27:52.018 [2024-07-14 05:41:58.934376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a28f50 is same with the state(5) to be set 00:27:52.018 [2024-07-14 05:41:58.934392] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:52.018 [2024-07-14 05:41:58.934406] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:52.018 [2024-07-14 05:41:58.934421] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:52.018 [2024-07-14 05:41:58.934477] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.018 [2024-07-14 05:41:58.934502] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
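Note: the posix_sock_create errors above are ordinary POSIX connect() failures hit while the initiator tries to re-establish the TCP connection to 10.0.0.2:4420 during the reset; errno 111 is ECONNREFUSED on Linux, and when the reconnect attempts keep failing the reconnect poll gives up with the "controller reinitialization failed" / "in failed state" messages seen for cnode3. A standalone check of that errno mapping, illustrative only and independent of SPDK:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            /* errno 111 as reported by posix_sock_create's connect() failure:
             * on Linux this is ECONNREFUSED ("Connection refused"), i.e. the
             * target at 10.0.0.2:4420 was not accepting connections yet. */
            printf("errno 111 (ECONNREFUSED=%d): %s\n", ECONNREFUSED, strerror(111));
            return 0;
    }
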
00:27:52.018 [2024-07-14 05:41:58.934539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237d610 (9): Bad file descriptor
00:27:52.018 [2024-07-14 05:41:58.934572] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a29f00 (9): Bad file descriptor
00:27:52.018 [2024-07-14 05:41:58.934616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a47ec0 (9): Bad file descriptor
00:27:52.018 [2024-07-14 05:41:58.934647] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a28f50 (9): Bad file descriptor
00:27:52.018 [2024-07-14 05:41:58.934672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2885300 (9): Bad file descriptor
00:27:52.018 [2024-07-14 05:41:58.934912] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:52.018 [2024-07-14 05:41:58.936104] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:52.018 [2024-07-14 05:41:58.936145] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:52.018 [2024-07-14 05:41:58.936243 - 05:41:58.938181] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (one READ print and one ABORTED completion per cid)
00:27:52.020 [2024-07-14 05:41:58.938196] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x295e1d0 is same with the state(5) to be set
00:27:52.020 [2024-07-14 05:41:58.939437 - 05:41:58.941374] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (one READ print and one ABORTED completion per cid)
00:27:52.021 [2024-07-14 05:41:58.941390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29dd1f0 is same with the state(5) to be set
00:27:52.021 [2024-07-14 05:41:58.942635 - 05:41:58.952944] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-57 nsid:1 lba:24576-31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (one READ print and one ABORTED completion per cid)
00:27:52.023 [2024-07-14 05:41:58.952960] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.952974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.952990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.953004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.953019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.953033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.953050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.953074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.953091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.953106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.953122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.953135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.953150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29de6f0 is same with the state(5) to be set 00:27:52.023 [2024-07-14 05:41:58.954528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.954553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.954577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.954593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.954609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.954623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.954639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.954654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.954669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.954684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.954699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.954713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.954729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.954743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.954759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.954772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.954788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.954802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.954818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.954837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.954853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.954874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.954893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.954907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.954923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.954937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.954953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.954967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.954983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.954996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.955012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.955026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.955043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.955057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.955072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.955086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.955102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.955116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.955132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.955145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.955161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.955175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.955191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.955204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.955224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.955239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.955255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.023 [2024-07-14 05:41:58.955269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.023 [2024-07-14 05:41:58.955285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.955968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.955987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.956002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.956019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.956033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.956048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.956062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.956080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.956094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.956110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.956124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.956139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.956153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.956168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.956182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.956198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.956211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.956227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.956240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.956256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.956270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.956286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.956299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.956315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.956329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.956344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.956361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.024 [2024-07-14 05:41:58.956378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.024 [2024-07-14 05:41:58.956392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.956408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.956421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.956437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.956451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.956467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.956481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.956496] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x287eb10 is same with the state(5) to be set 00:27:52.025 [2024-07-14 05:41:58.956571] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x287eb10 was disconnected and freed. reset controller. 00:27:52.025 [2024-07-14 05:41:58.956678] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:52.025 [2024-07-14 05:41:58.956709] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:52.025 [2024-07-14 05:41:58.956728] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:52.025 [2024-07-14 05:41:58.957005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.025 [2024-07-14 05:41:58.957035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28ddf90 with addr=10.0.0.2, port=4420 00:27:52.025 [2024-07-14 05:41:58.957052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28ddf90 is same with the state(5) to be set 00:27:52.025 [2024-07-14 05:41:58.957069] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:52.025 [2024-07-14 05:41:58.957082] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:52.025 [2024-07-14 05:41:58.957097] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:52.025 [2024-07-14 05:41:58.957119] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:52.025 [2024-07-14 05:41:58.957133] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:52.025 [2024-07-14 05:41:58.957147] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:52.025 [2024-07-14 05:41:58.957211] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.025 [2024-07-14 05:41:58.957252] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.025 [2024-07-14 05:41:58.957272] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.025 [2024-07-14 05:41:58.957300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28ddf90 (9): Bad file descriptor 00:27:52.025 [2024-07-14 05:41:58.958750] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.025 [2024-07-14 05:41:58.958781] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:52.025 [2024-07-14 05:41:58.958809] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:52.025 [2024-07-14 05:41:58.958997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.025 [2024-07-14 05:41:58.959024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2883190 with addr=10.0.0.2, port=4420 00:27:52.025 [2024-07-14 05:41:58.959040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2883190 is same with the state(5) to be set 00:27:52.025 [2024-07-14 05:41:58.959195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.025 [2024-07-14 05:41:58.959220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28b3f90 with addr=10.0.0.2, port=4420 00:27:52.025 [2024-07-14 05:41:58.959235] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28b3f90 is same with the state(5) to be set 00:27:52.025 [2024-07-14 05:41:58.959389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.025 [2024-07-14 05:41:58.959414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28a86b0 with addr=10.0.0.2, port=4420 00:27:52.025 [2024-07-14 05:41:58.959429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28a86b0 is same with the state(5) to be set 00:27:52.025 [2024-07-14 05:41:58.960247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.960974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.960987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.961003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.961017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.025 [2024-07-14 05:41:58.961033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.025 [2024-07-14 05:41:58.961047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:52.025 [2024-07-14 05:41:58.961062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 
05:41:58.961363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961659] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.961971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.961984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.962000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.962014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.962029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.962047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.962063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.962077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.962094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.962108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.962125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.962139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.962155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.962168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.962184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.962197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.962212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x287d630 is same with the state(5) to be set 00:27:52.026 [2024-07-14 05:41:58.963493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.963517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.963539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.963554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.963570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.963584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.963600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.963614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.026 [2024-07-14 05:41:58.963629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.026 [2024-07-14 05:41:58.963643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.963659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.963673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.963689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.963709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.963725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.963739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.963754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.963768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.963784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.963797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.963813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.963826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.963842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.963856] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.963879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.963895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.963911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.963925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.963941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.963954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.963970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.963983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.027 [2024-07-14 05:41:58.964905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.027 [2024-07-14 05:41:58.964919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.028 [2024-07-14 05:41:58.964935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.028 [2024-07-14 05:41:58.964948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.028 [2024-07-14 05:41:58.964965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.028 [2024-07-14 05:41:58.964980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.028 [2024-07-14 05:41:58.964996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.028 [2024-07-14 05:41:58.965009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.028 [2024-07-14 05:41:58.965024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.028 [2024-07-14 05:41:58.965038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.028 [2024-07-14 05:41:58.965054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.028 [2024-07-14 05:41:58.965067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:52.028 [2024-07-14 05:41:58.965083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.028 [2024-07-14 05:41:58.965097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.028 [2024-07-14 05:41:58.965112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.028 [2024-07-14 05:41:58.965126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.028 [2024-07-14 05:41:58.965141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.028 [2024-07-14 05:41:58.965154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.028 [2024-07-14 05:41:58.965170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.028 [2024-07-14 05:41:58.965183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.028 [2024-07-14 05:41:58.965199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.028 [2024-07-14 05:41:58.965217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.028 [2024-07-14 05:41:58.965233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.028 [2024-07-14 05:41:58.965247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.028 [2024-07-14 05:41:58.965262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.028 [2024-07-14 05:41:58.965276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.028 [2024-07-14 05:41:58.965291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.028 [2024-07-14 05:41:58.965305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.028 [2024-07-14 05:41:58.965320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.028 [2024-07-14 05:41:58.965334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.028 [2024-07-14 05:41:58.965350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.028 [2024-07-14 05:41:58.965364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.028 [2024-07-14 
05:41:58.965379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.028 [2024-07-14 05:41:58.965393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.028 [2024-07-14 05:41:58.965409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.028 [2024-07-14 05:41:58.965423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.028 [2024-07-14 05:41:58.965437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2880010 is same with the state(5) to be set
00:27:52.028 [2024-07-14 05:41:58.967049] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:52.028 [2024-07-14 05:41:58.967082] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:52.028 task offset: 24704 on job bdev=Nvme3n1 fails
00:27:52.028
00:27:52.028 Latency(us)
00:27:52.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:52.028 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.028 Job: Nvme1n1 ended in about 1.03 seconds with error
00:27:52.028 Verification LBA range: start 0x0 length 0x400
00:27:52.028 Nvme1n1 : 1.03 186.63 11.66 62.21 0.00 254785.14 10534.31 265639.25
00:27:52.028 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.028 Job: Nvme2n1 ended in about 1.06 seconds with error
00:27:52.028 Verification LBA range: start 0x0 length 0x400
00:27:52.028 Nvme2n1 : 1.06 180.53 11.28 60.18 0.00 259006.39 20971.52 243891.01
00:27:52.028 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.028 Job: Nvme3n1 ended in about 1.02 seconds with error
00:27:52.028 Verification LBA range: start 0x0 length 0x400
00:27:52.028 Nvme3n1 : 1.02 187.70 11.73 62.57 0.00 244427.43 4393.34 256318.58
00:27:52.028 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.028 Job: Nvme4n1 ended in about 1.07 seconds with error
00:27:52.028 Verification LBA range: start 0x0 length 0x400
00:27:52.028 Nvme4n1 : 1.07 180.00 11.25 60.00 0.00 250965.52 21456.97 254765.13
00:27:52.028 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.028 Job: Nvme5n1 ended in about 1.08 seconds with error
00:27:52.028 Verification LBA range: start 0x0 length 0x400
00:27:52.028 Nvme5n1 : 1.08 178.03 11.13 59.34 0.00 249430.47 19903.53 251658.24
00:27:52.028 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.028 Job: Nvme6n1 ended in about 1.09 seconds with error
00:27:52.028 Verification LBA range: start 0x0 length 0x400
00:27:52.028 Nvme6n1 : 1.09 117.70 7.36 58.85 0.00 329784.13 26991.12 309135.74
00:27:52.028 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.028 Job: Nvme7n1 ended in about 1.08 seconds with error
00:27:52.028 Verification LBA range: start 0x0 length 0x400
00:27:52.028 Nvme7n1 : 1.08 181.97 11.37 59.12 0.00 236959.84 20097.71 250104.79
00:27:52.028 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.028 Job: Nvme8n1 ended in about 1.09 seconds with error
00:27:52.028 Verification LBA range: start 0x0 length 0x400
00:27:52.028 Nvme8n1 : 1.09 176.04 11.00 58.68 0.00 239209.24 21942.42 253211.69
00:27:52.028 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.028 Verification LBA range: start 0x0 length 0x400
00:27:52.028 Nvme9n1 : 1.02 187.35 11.71 0.00 0.00 291215.99 21068.61 270299.59
00:27:52.028 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.028 Job: Nvme10n1 ended in about 1.06 seconds with error
00:27:52.028 Verification LBA range: start 0x0 length 0x400
00:27:52.028 Nvme10n1 : 1.06 181.09 11.32 60.36 0.00 222930.87 21165.70 264085.81
00:27:52.028 ===================================================================================================================
00:27:52.028 Total : 1757.03 109.81 541.30 0.00 255064.36 4393.34 309135.74
00:27:52.028 [2024-07-14 05:41:58.993526] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:52.028 [2024-07-14 05:41:58.993617] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:52.028 [2024-07-14 05:41:58.994058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.028 [2024-07-14 05:41:58.994097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237d610 with addr=10.0.0.2, port=4420
00:27:52.028 [2024-07-14 05:41:58.994117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237d610 is same with the state(5) to be set
00:27:52.028 [2024-07-14 05:41:58.994147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2883190 (9): Bad file descriptor
00:27:52.028 [2024-07-14 05:41:58.994175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28b3f90 (9): Bad file descriptor
00:27:52.028 [2024-07-14 05:41:58.994193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28a86b0 (9): Bad file descriptor
00:27:52.028 [2024-07-14 05:41:58.994210] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:27:52.028 [2024-07-14 05:41:58.994223] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:27:52.028 [2024-07-14 05:41:58.994239] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:27:52.028 [2024-07-14 05:41:58.994323] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:52.029 [2024-07-14 05:41:58.994347] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:52.029 [2024-07-14 05:41:58.994368] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:52.029 [2024-07-14 05:41:58.994387] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:52.029 [2024-07-14 05:41:58.995113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:52.029 [2024-07-14 05:41:58.995366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.029 [2024-07-14 05:41:58.995397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28b0810 with addr=10.0.0.2, port=4420 00:27:52.029 [2024-07-14 05:41:58.995415] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28b0810 is same with the state(5) to be set 00:27:52.029 [2024-07-14 05:41:58.995581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.029 [2024-07-14 05:41:58.995607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a47ec0 with addr=10.0.0.2, port=4420 00:27:52.029 [2024-07-14 05:41:58.995624] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a47ec0 is same with the state(5) to be set 00:27:52.029 [2024-07-14 05:41:58.995787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.029 [2024-07-14 05:41:58.995813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a29f00 with addr=10.0.0.2, port=4420 00:27:52.029 [2024-07-14 05:41:58.995829] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a29f00 is same with the state(5) to be set 00:27:52.029 [2024-07-14 05:41:58.995848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237d610 (9): Bad file descriptor 00:27:52.029 [2024-07-14 05:41:58.995872] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:52.029 [2024-07-14 05:41:58.995888] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:52.029 [2024-07-14 05:41:58.995902] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:52.029 [2024-07-14 05:41:58.995953] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:52.029 [2024-07-14 05:41:58.995968] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:52.029 [2024-07-14 05:41:58.995982] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:52.029 [2024-07-14 05:41:58.995999] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:52.029 [2024-07-14 05:41:58.996012] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:52.029 [2024-07-14 05:41:58.996025] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:52.029 [2024-07-14 05:41:58.996049] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.029 [2024-07-14 05:41:58.996084] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.029 [2024-07-14 05:41:58.996105] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.029 [2024-07-14 05:41:58.996129] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:52.029 [2024-07-14 05:41:58.996148] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.029 [2024-07-14 05:41:58.996166] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.029 [2024-07-14 05:41:58.996719] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:52.029 [2024-07-14 05:41:58.996752] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:52.029 [2024-07-14 05:41:58.996793] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.029 [2024-07-14 05:41:58.996810] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.029 [2024-07-14 05:41:58.996827] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.029 [2024-07-14 05:41:58.996863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28b0810 (9): Bad file descriptor 00:27:52.029 [2024-07-14 05:41:58.996894] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a47ec0 (9): Bad file descriptor 00:27:52.029 [2024-07-14 05:41:58.996913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a29f00 (9): Bad file descriptor 00:27:52.029 [2024-07-14 05:41:58.996929] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:52.029 [2024-07-14 05:41:58.996942] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:52.029 [2024-07-14 05:41:58.996955] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:52.029 [2024-07-14 05:41:58.997028] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:52.029 [2024-07-14 05:41:58.997052] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.029 [2024-07-14 05:41:58.997209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.029 [2024-07-14 05:41:58.997235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a28f50 with addr=10.0.0.2, port=4420 00:27:52.029 [2024-07-14 05:41:58.997252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a28f50 is same with the state(5) to be set 00:27:52.029 [2024-07-14 05:41:58.997397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.029 [2024-07-14 05:41:58.997422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2885300 with addr=10.0.0.2, port=4420 00:27:52.029 [2024-07-14 05:41:58.997438] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2885300 is same with the state(5) to be set 00:27:52.029 [2024-07-14 05:41:58.997452] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:52.029 [2024-07-14 05:41:58.997465] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:52.029 [2024-07-14 05:41:58.997478] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:27:52.029 [2024-07-14 05:41:58.997496] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:52.029 [2024-07-14 05:41:58.997511] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:52.029 [2024-07-14 05:41:58.997523] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:52.029 [2024-07-14 05:41:58.997539] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:52.029 [2024-07-14 05:41:58.997553] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:52.029 [2024-07-14 05:41:58.997566] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:52.029 [2024-07-14 05:41:58.997622] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.029 [2024-07-14 05:41:58.997642] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.029 [2024-07-14 05:41:58.997655] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.029 [2024-07-14 05:41:58.997793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.029 [2024-07-14 05:41:58.997819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28ddf90 with addr=10.0.0.2, port=4420 00:27:52.029 [2024-07-14 05:41:58.997835] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28ddf90 is same with the state(5) to be set 00:27:52.029 [2024-07-14 05:41:58.997858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a28f50 (9): Bad file descriptor 00:27:52.029 [2024-07-14 05:41:58.997888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2885300 (9): Bad file descriptor 00:27:52.029 [2024-07-14 05:41:58.997935] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28ddf90 (9): Bad file descriptor 00:27:52.029 [2024-07-14 05:41:58.997957] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:52.029 [2024-07-14 05:41:58.997970] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:52.029 [2024-07-14 05:41:58.997983] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:52.029 [2024-07-14 05:41:58.997999] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:52.029 [2024-07-14 05:41:58.998013] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:52.029 [2024-07-14 05:41:58.998026] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:52.029 [2024-07-14 05:41:58.998068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.029 [2024-07-14 05:41:58.998087] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:52.029 [2024-07-14 05:41:58.998099] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:52.029 [2024-07-14 05:41:58.998112] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:52.029 [2024-07-14 05:41:58.998125] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:52.029 [2024-07-14 05:41:58.998160] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.597 05:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:52.597 05:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3321221 00:27:53.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3321221) - No such process 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:53.535 rmmod nvme_tcp 00:27:53.535 rmmod nvme_fabrics 00:27:53.535 rmmod nvme_keyring 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:53.535 05:42:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.535 05:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.439 05:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:55.439 00:27:55.439 real 0m7.414s 00:27:55.439 user 0m17.887s 00:27:55.439 sys 0m1.503s 00:27:55.439 05:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:55.439 05:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:55.439 ************************************ 00:27:55.439 END TEST nvmf_shutdown_tc3 00:27:55.439 ************************************ 00:27:55.698 05:42:02 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:55.698 00:27:55.698 real 0m27.543s 00:27:55.698 user 1m17.464s 00:27:55.698 sys 0m6.515s 00:27:55.698 05:42:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:55.698 05:42:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:55.698 ************************************ 00:27:55.698 END TEST nvmf_shutdown 00:27:55.698 ************************************ 00:27:55.698 05:42:02 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:55.698 05:42:02 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:55.698 05:42:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:55.698 05:42:02 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:55.698 05:42:02 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:55.698 05:42:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:55.698 05:42:02 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:55.698 05:42:02 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:55.698 05:42:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:55.698 05:42:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:55.698 05:42:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:55.698 ************************************ 00:27:55.698 START TEST nvmf_multicontroller 00:27:55.698 ************************************ 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:55.698 * Looking for test storage... 
00:27:55.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.698 05:42:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:55.699 05:42:02 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:55.699 05:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.607 05:42:04 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:57.607 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:57.608 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:57.608 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:57.608 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:57.608 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:57.608 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:57.867 05:42:04 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:57.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:57.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:27:57.867 00:27:57.867 --- 10.0.0.2 ping statistics --- 00:27:57.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.867 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:57.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:57.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:27:57.867 00:27:57.867 --- 10.0.0.1 ping statistics --- 00:27:57.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.867 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3323732 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:57.867 05:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3323732 00:27:57.868 05:42:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3323732 ']' 00:27:57.868 05:42:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.868 05:42:04 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:27:57.868 05:42:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.868 05:42:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:57.868 05:42:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.868 [2024-07-14 05:42:04.887206] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:57.868 [2024-07-14 05:42:04.887300] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.868 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.868 [2024-07-14 05:42:04.954573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:58.148 [2024-07-14 05:42:05.040467] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.148 [2024-07-14 05:42:05.040523] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.148 [2024-07-14 05:42:05.040551] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.148 [2024-07-14 05:42:05.040562] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.148 [2024-07-14 05:42:05.040572] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:58.148 [2024-07-14 05:42:05.040889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.148 [2024-07-14 05:42:05.040988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:58.148 [2024-07-14 05:42:05.040992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.148 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.149 [2024-07-14 05:42:05.171380] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.149 05:42:05 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.149 Malloc0 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.149 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.149 [2024-07-14 05:42:05.235908] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.416 [2024-07-14 05:42:05.243776] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.416 Malloc1 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3323877 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3323877 /var/tmp/bdevperf.sock 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3323877 ']' 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:58.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:58.416 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.674 NVMe0n1 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.674 1 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.674 request: 00:27:58.674 { 00:27:58.674 "name": "NVMe0", 00:27:58.674 "trtype": "tcp", 00:27:58.674 "traddr": "10.0.0.2", 00:27:58.674 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:58.674 "hostaddr": "10.0.0.2", 00:27:58.674 "hostsvcid": "60000", 00:27:58.674 "adrfam": "ipv4", 00:27:58.674 "trsvcid": "4420", 00:27:58.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:58.674 "method": 
"bdev_nvme_attach_controller", 00:27:58.674 "req_id": 1 00:27:58.674 } 00:27:58.674 Got JSON-RPC error response 00:27:58.674 response: 00:27:58.674 { 00:27:58.674 "code": -114, 00:27:58.674 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:58.674 } 00:27:58.674 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.933 request: 00:27:58.933 { 00:27:58.933 "name": "NVMe0", 00:27:58.933 "trtype": "tcp", 00:27:58.933 "traddr": "10.0.0.2", 00:27:58.933 "hostaddr": "10.0.0.2", 00:27:58.933 "hostsvcid": "60000", 00:27:58.933 "adrfam": "ipv4", 00:27:58.933 "trsvcid": "4420", 00:27:58.933 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:58.933 "method": "bdev_nvme_attach_controller", 00:27:58.933 "req_id": 1 00:27:58.933 } 00:27:58.933 Got JSON-RPC error response 00:27:58.933 response: 00:27:58.933 { 00:27:58.933 "code": -114, 00:27:58.933 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:58.933 } 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.933 request: 00:27:58.933 { 00:27:58.933 "name": "NVMe0", 00:27:58.933 "trtype": "tcp", 00:27:58.933 "traddr": "10.0.0.2", 00:27:58.933 "hostaddr": "10.0.0.2", 00:27:58.933 "hostsvcid": "60000", 00:27:58.933 "adrfam": "ipv4", 00:27:58.933 "trsvcid": "4420", 00:27:58.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:58.933 "multipath": "disable", 00:27:58.933 "method": "bdev_nvme_attach_controller", 00:27:58.933 "req_id": 1 00:27:58.933 } 00:27:58.933 Got JSON-RPC error response 00:27:58.933 response: 00:27:58.933 { 00:27:58.933 "code": -114, 00:27:58.933 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:58.933 } 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.933 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.933 request: 00:27:58.933 { 00:27:58.933 "name": "NVMe0", 00:27:58.933 "trtype": "tcp", 00:27:58.933 "traddr": "10.0.0.2", 00:27:58.933 "hostaddr": "10.0.0.2", 00:27:58.933 "hostsvcid": "60000", 00:27:58.933 "adrfam": "ipv4", 00:27:58.933 "trsvcid": "4420", 00:27:58.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:58.933 "multipath": "failover", 00:27:58.933 "method": "bdev_nvme_attach_controller", 00:27:58.933 "req_id": 1 00:27:58.933 } 00:27:58.933 Got JSON-RPC error response 00:27:58.933 response: 00:27:58.933 { 00:27:58.933 "code": -114, 00:27:58.934 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:58.934 } 00:27:58.934 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:58.934 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:58.934 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:58.934 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:58.934 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:58.934 05:42:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:58.934 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.934 05:42:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.192 00:27:59.192 05:42:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.192 05:42:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:59.192 05:42:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.192 05:42:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.192 05:42:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.192 05:42:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:59.192 05:42:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.192 05:42:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.192 00:27:59.192 05:42:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.192 05:42:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:59.192 05:42:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.192 05:42:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:59.192 05:42:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.192 05:42:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.192 05:42:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:59.192 05:42:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:00.566 0 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3323877 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3323877 ']' 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3323877 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3323877 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3323877' 00:28:00.566 killing process with pid 3323877 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3323877 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3323877 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:00.566 05:42:07 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:28:00.566 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:00.566 [2024-07-14 05:42:05.346342] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:00.566 [2024-07-14 05:42:05.346438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323877 ] 00:28:00.566 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.566 [2024-07-14 05:42:05.408980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.566 [2024-07-14 05:42:05.494329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.566 [2024-07-14 05:42:06.122650] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name d58da84a-5b49-4fcd-b4ab-5e33fe673d24 already exists 00:28:00.566 [2024-07-14 05:42:06.122690] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:d58da84a-5b49-4fcd-b4ab-5e33fe673d24 alias for bdev NVMe1n1 00:28:00.566 [2024-07-14 05:42:06.122723] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:00.566 Running I/O for 1 seconds... 
00:28:00.566 00:28:00.566 Latency(us) 00:28:00.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.566 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:00.566 NVMe0n1 : 1.00 19330.34 75.51 0.00 0.00 6603.76 3543.80 11165.39 00:28:00.566 =================================================================================================================== 00:28:00.566 Total : 19330.34 75.51 0.00 0.00 6603.76 3543.80 11165.39 00:28:00.566 Received shutdown signal, test time was about 1.000000 seconds 00:28:00.566 00:28:00.566 Latency(us) 00:28:00.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.566 =================================================================================================================== 00:28:00.566 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:00.566 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:00.566 rmmod nvme_tcp 00:28:00.566 rmmod nvme_fabrics 00:28:00.566 rmmod nvme_keyring 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3323732 ']' 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3323732 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3323732 ']' 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3323732 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3323732 00:28:00.566 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:00.567 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:00.567 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3323732' 00:28:00.567 killing process with pid 3323732 00:28:00.567 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3323732 00:28:00.567 05:42:07 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3323732 00:28:00.825 05:42:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:00.825 05:42:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:00.825 05:42:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:00.825 05:42:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:00.825 05:42:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:00.825 05:42:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.825 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:00.825 05:42:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.355 05:42:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:03.355 00:28:03.355 real 0m7.294s 00:28:03.355 user 0m11.435s 00:28:03.355 sys 0m2.206s 00:28:03.356 05:42:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:03.356 05:42:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.356 ************************************ 00:28:03.356 END TEST nvmf_multicontroller 00:28:03.356 ************************************ 00:28:03.356 05:42:09 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:03.356 05:42:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:03.356 05:42:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:03.356 05:42:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:03.356 ************************************ 00:28:03.356 START TEST nvmf_aer 00:28:03.356 ************************************ 00:28:03.356 05:42:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:03.356 * Looking for test storage... 
00:28:03.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:03.356 05:42:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:05.252 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:28:05.252 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:05.252 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:05.252 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:05.252 
05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:05.252 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:05.253 05:42:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:05.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:05.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:28:05.253 00:28:05.253 --- 10.0.0.2 ping statistics --- 00:28:05.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.253 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:05.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:05.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:28:05.253 00:28:05.253 --- 10.0.0.1 ping statistics --- 00:28:05.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.253 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3326592 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3326592 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 3326592 ']' 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:05.253 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.253 [2024-07-14 05:42:12.162751] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:05.253 [2024-07-14 05:42:12.162843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.253 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.253 [2024-07-14 05:42:12.228555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:05.253 [2024-07-14 05:42:12.318433] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.253 [2024-07-14 05:42:12.318491] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:05.253 [2024-07-14 05:42:12.318504] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.253 [2024-07-14 05:42:12.318515] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:05.253 [2024-07-14 05:42:12.318524] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:05.253 [2024-07-14 05:42:12.318572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.253 [2024-07-14 05:42:12.318628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:05.253 [2024-07-14 05:42:12.318698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:05.253 [2024-07-14 05:42:12.318701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.510 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.511 [2024-07-14 05:42:12.485728] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.511 Malloc0 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.511 [2024-07-14 05:42:12.539608] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.511 [ 00:28:05.511 { 00:28:05.511 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:05.511 "subtype": "Discovery", 00:28:05.511 "listen_addresses": [], 00:28:05.511 "allow_any_host": true, 00:28:05.511 "hosts": [] 00:28:05.511 }, 00:28:05.511 { 00:28:05.511 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.511 "subtype": "NVMe", 00:28:05.511 "listen_addresses": [ 00:28:05.511 { 00:28:05.511 "trtype": "TCP", 00:28:05.511 "adrfam": "IPv4", 00:28:05.511 "traddr": "10.0.0.2", 00:28:05.511 "trsvcid": "4420" 00:28:05.511 } 00:28:05.511 ], 00:28:05.511 "allow_any_host": true, 00:28:05.511 "hosts": [], 00:28:05.511 "serial_number": "SPDK00000000000001", 00:28:05.511 "model_number": "SPDK bdev Controller", 00:28:05.511 "max_namespaces": 2, 00:28:05.511 "min_cntlid": 1, 00:28:05.511 "max_cntlid": 65519, 00:28:05.511 "namespaces": [ 00:28:05.511 { 00:28:05.511 "nsid": 1, 00:28:05.511 "bdev_name": "Malloc0", 00:28:05.511 "name": "Malloc0", 00:28:05.511 "nguid": "87FE1B3E353A40EABD46796AC8687884", 00:28:05.511 "uuid": "87fe1b3e-353a-40ea-bd46-796ac8687884" 00:28:05.511 } 00:28:05.511 ] 00:28:05.511 } 00:28:05.511 ] 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3326621 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:28:05.511 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:05.511 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.768 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:05.768 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:28:05.768 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:28:05.769 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:05.769 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:05.769 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:28:05.769 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:28:05.769 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:05.769 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:05.769 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:05.769 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:28:05.769 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:05.769 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.769 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.027 Malloc1 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.027 Asynchronous Event Request test 00:28:06.027 Attaching to 10.0.0.2 00:28:06.027 Attached to 10.0.0.2 00:28:06.027 Registering asynchronous event callbacks... 00:28:06.027 Starting namespace attribute notice tests for all controllers... 00:28:06.027 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:06.027 aer_cb - Changed Namespace 00:28:06.027 Cleaning up... 
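The nvmf_aer run traced above reduces to a short RPC flow: create the TCP transport, expose a malloc bdev as namespace 1 of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, start the test/nvme/aer listener, then hot-add a second namespace so the target raises the Namespace Attribute Changed notice (log page 4, aen_event_type 0x02) reported by the aer_cb lines. A minimal sketch of that flow, assuming a running nvmf_tgt and SPDK's scripts/rpc.py on PATH (sizes, NQNs and addresses copied from the trace; this is an illustration, not the exact aer.sh script):

    # create the transport and a subsystem with one namespace
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 --name Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # start the AER listener in the background; it touches a file once its callbacks are registered
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
    # hot-add namespace 2 to trigger the Namespace Attribute Changed notice
    rpc.py bdev_malloc_create 64 4096 --name Malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait $aerpid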
00:28:06.027 [ 00:28:06.027 { 00:28:06.027 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:06.027 "subtype": "Discovery", 00:28:06.027 "listen_addresses": [], 00:28:06.027 "allow_any_host": true, 00:28:06.027 "hosts": [] 00:28:06.027 }, 00:28:06.027 { 00:28:06.027 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:06.027 "subtype": "NVMe", 00:28:06.027 "listen_addresses": [ 00:28:06.027 { 00:28:06.027 "trtype": "TCP", 00:28:06.027 "adrfam": "IPv4", 00:28:06.027 "traddr": "10.0.0.2", 00:28:06.027 "trsvcid": "4420" 00:28:06.027 } 00:28:06.027 ], 00:28:06.027 "allow_any_host": true, 00:28:06.027 "hosts": [], 00:28:06.027 "serial_number": "SPDK00000000000001", 00:28:06.027 "model_number": "SPDK bdev Controller", 00:28:06.027 "max_namespaces": 2, 00:28:06.027 "min_cntlid": 1, 00:28:06.027 "max_cntlid": 65519, 00:28:06.027 "namespaces": [ 00:28:06.027 { 00:28:06.027 "nsid": 1, 00:28:06.027 "bdev_name": "Malloc0", 00:28:06.027 "name": "Malloc0", 00:28:06.027 "nguid": "87FE1B3E353A40EABD46796AC8687884", 00:28:06.027 "uuid": "87fe1b3e-353a-40ea-bd46-796ac8687884" 00:28:06.027 }, 00:28:06.027 { 00:28:06.027 "nsid": 2, 00:28:06.027 "bdev_name": "Malloc1", 00:28:06.027 "name": "Malloc1", 00:28:06.027 "nguid": "D3966FFE16C449579DB70902F021AC8C", 00:28:06.027 "uuid": "d3966ffe-16c4-4957-9db7-0902f021ac8c" 00:28:06.027 } 00:28:06.027 ] 00:28:06.027 } 00:28:06.027 ] 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3326621 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.027 05:42:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.027 05:42:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.027 05:42:13 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:06.027 05:42:13 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:06.027 05:42:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:06.027 05:42:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:06.027 05:42:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:06.027 05:42:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:06.027 05:42:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:06.027 05:42:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:06.027 rmmod nvme_tcp 00:28:06.027 rmmod nvme_fabrics 00:28:06.027 rmmod nvme_keyring 00:28:06.027 05:42:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:06.027 05:42:13 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:28:06.027 05:42:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:06.027 05:42:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3326592 ']' 00:28:06.027 05:42:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3326592 00:28:06.028 05:42:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 3326592 ']' 00:28:06.028 05:42:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 3326592 00:28:06.028 05:42:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:28:06.028 05:42:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:06.028 05:42:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3326592 00:28:06.028 05:42:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:06.028 05:42:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:06.028 05:42:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3326592' 00:28:06.028 killing process with pid 3326592 00:28:06.028 05:42:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 3326592 00:28:06.028 05:42:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 3326592 00:28:06.285 05:42:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:06.285 05:42:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:06.285 05:42:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:06.285 05:42:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:06.285 05:42:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:06.285 05:42:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.285 05:42:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:06.285 05:42:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.814 05:42:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:08.814 00:28:08.814 real 0m5.370s 00:28:08.814 user 0m4.542s 00:28:08.814 sys 0m1.865s 00:28:08.814 05:42:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:08.814 05:42:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.814 ************************************ 00:28:08.814 END TEST nvmf_aer 00:28:08.814 ************************************ 00:28:08.814 05:42:15 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:08.814 05:42:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:08.814 05:42:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:08.814 05:42:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:08.814 ************************************ 00:28:08.814 START TEST nvmf_async_init 00:28:08.814 ************************************ 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:08.814 * Looking for test storage... 
00:28:08.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.814 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=49cc684d408948359c8608f4a6d076ce 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:08.815 05:42:15 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:08.815 05:42:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:10.715 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:10.715 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:10.715 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
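As in the earlier nvmf_aer run, the nvmf/common.sh prologue keeps the two E810 functions (8086:159b at 0000:0a:00.0 and 0000:0a:00.1) as candidate ports and maps each PCI address to its kernel interface name by listing sysfs, which is where interface names such as cvl_0_0 come from. A rough equivalent of the lookup being traced here (the PCI addresses and the cvl_* renames are specific to this test bed):

    # map each candidate PCI function to its netdev, as traced above
    net_devs=()
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

The first two discovered interfaces then become NVMF_TARGET_INTERFACE (cvl_0_0, moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2) and NVMF_INITIATOR_INTERFACE (cvl_0_1, 10.0.0.1), the topology exercised by the ping checks that follow.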
00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:10.715 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:10.715 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:10.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:28:10.716 00:28:10.716 --- 10.0.0.2 ping statistics --- 00:28:10.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.716 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:10.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:28:10.716 00:28:10.716 --- 10.0.0.1 ping statistics --- 00:28:10.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.716 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3328561 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3328561 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 3328561 ']' 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.716 [2024-07-14 05:42:17.556296] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:28:10.716 [2024-07-14 05:42:17.556368] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.716 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.716 [2024-07-14 05:42:17.619447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.716 [2024-07-14 05:42:17.704193] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.716 [2024-07-14 05:42:17.704245] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.716 [2024-07-14 05:42:17.704273] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.716 [2024-07-14 05:42:17.704284] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.716 [2024-07-14 05:42:17.704294] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:10.716 [2024-07-14 05:42:17.704323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.716 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.975 [2024-07-14 05:42:17.838201] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.975 null0 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 49cc684d408948359c8608f4a6d076ce 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.975 [2024-07-14 05:42:17.878465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.975 05:42:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.233 nvme0n1 00:28:11.233 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.233 05:42:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:11.233 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.233 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.233 [ 00:28:11.233 { 00:28:11.233 "name": "nvme0n1", 00:28:11.233 "aliases": [ 00:28:11.233 "49cc684d-4089-4835-9c86-08f4a6d076ce" 00:28:11.233 ], 00:28:11.234 "product_name": "NVMe disk", 00:28:11.234 "block_size": 512, 00:28:11.234 "num_blocks": 2097152, 00:28:11.234 "uuid": "49cc684d-4089-4835-9c86-08f4a6d076ce", 00:28:11.234 "assigned_rate_limits": { 00:28:11.234 "rw_ios_per_sec": 0, 00:28:11.234 "rw_mbytes_per_sec": 0, 00:28:11.234 "r_mbytes_per_sec": 0, 00:28:11.234 "w_mbytes_per_sec": 0 00:28:11.234 }, 00:28:11.234 "claimed": false, 00:28:11.234 "zoned": false, 00:28:11.234 "supported_io_types": { 00:28:11.234 "read": true, 00:28:11.234 "write": true, 00:28:11.234 "unmap": false, 00:28:11.234 "write_zeroes": true, 00:28:11.234 "flush": true, 00:28:11.234 "reset": true, 00:28:11.234 "compare": true, 00:28:11.234 "compare_and_write": true, 00:28:11.234 "abort": true, 00:28:11.234 "nvme_admin": true, 00:28:11.234 "nvme_io": true 00:28:11.234 }, 00:28:11.234 "memory_domains": [ 00:28:11.234 { 00:28:11.234 "dma_device_id": "system", 00:28:11.234 "dma_device_type": 1 00:28:11.234 } 00:28:11.234 ], 00:28:11.234 "driver_specific": { 00:28:11.234 "nvme": [ 00:28:11.234 { 00:28:11.234 "trid": { 00:28:11.234 "trtype": "TCP", 00:28:11.234 "adrfam": "IPv4", 00:28:11.234 "traddr": "10.0.0.2", 00:28:11.234 "trsvcid": "4420", 00:28:11.234 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:11.234 }, 00:28:11.234 "ctrlr_data": { 00:28:11.234 "cntlid": 1, 00:28:11.234 "vendor_id": "0x8086", 00:28:11.234 "model_number": "SPDK bdev Controller", 00:28:11.234 "serial_number": "00000000000000000000", 00:28:11.234 "firmware_revision": 
"24.05.1", 00:28:11.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:11.234 "oacs": { 00:28:11.234 "security": 0, 00:28:11.234 "format": 0, 00:28:11.234 "firmware": 0, 00:28:11.234 "ns_manage": 0 00:28:11.234 }, 00:28:11.234 "multi_ctrlr": true, 00:28:11.234 "ana_reporting": false 00:28:11.234 }, 00:28:11.234 "vs": { 00:28:11.234 "nvme_version": "1.3" 00:28:11.234 }, 00:28:11.234 "ns_data": { 00:28:11.234 "id": 1, 00:28:11.234 "can_share": true 00:28:11.234 } 00:28:11.234 } 00:28:11.234 ], 00:28:11.234 "mp_policy": "active_passive" 00:28:11.234 } 00:28:11.234 } 00:28:11.234 ] 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.234 [2024-07-14 05:42:18.130976] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:11.234 [2024-07-14 05:42:18.131051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e51760 (9): Bad file descriptor 00:28:11.234 [2024-07-14 05:42:18.273016] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.234 [ 00:28:11.234 { 00:28:11.234 "name": "nvme0n1", 00:28:11.234 "aliases": [ 00:28:11.234 "49cc684d-4089-4835-9c86-08f4a6d076ce" 00:28:11.234 ], 00:28:11.234 "product_name": "NVMe disk", 00:28:11.234 "block_size": 512, 00:28:11.234 "num_blocks": 2097152, 00:28:11.234 "uuid": "49cc684d-4089-4835-9c86-08f4a6d076ce", 00:28:11.234 "assigned_rate_limits": { 00:28:11.234 "rw_ios_per_sec": 0, 00:28:11.234 "rw_mbytes_per_sec": 0, 00:28:11.234 "r_mbytes_per_sec": 0, 00:28:11.234 "w_mbytes_per_sec": 0 00:28:11.234 }, 00:28:11.234 "claimed": false, 00:28:11.234 "zoned": false, 00:28:11.234 "supported_io_types": { 00:28:11.234 "read": true, 00:28:11.234 "write": true, 00:28:11.234 "unmap": false, 00:28:11.234 "write_zeroes": true, 00:28:11.234 "flush": true, 00:28:11.234 "reset": true, 00:28:11.234 "compare": true, 00:28:11.234 "compare_and_write": true, 00:28:11.234 "abort": true, 00:28:11.234 "nvme_admin": true, 00:28:11.234 "nvme_io": true 00:28:11.234 }, 00:28:11.234 "memory_domains": [ 00:28:11.234 { 00:28:11.234 "dma_device_id": "system", 00:28:11.234 "dma_device_type": 1 00:28:11.234 } 00:28:11.234 ], 00:28:11.234 "driver_specific": { 00:28:11.234 "nvme": [ 00:28:11.234 { 00:28:11.234 "trid": { 00:28:11.234 "trtype": "TCP", 00:28:11.234 "adrfam": "IPv4", 00:28:11.234 "traddr": "10.0.0.2", 00:28:11.234 "trsvcid": "4420", 00:28:11.234 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:11.234 }, 00:28:11.234 "ctrlr_data": { 00:28:11.234 "cntlid": 2, 00:28:11.234 "vendor_id": "0x8086", 00:28:11.234 "model_number": "SPDK bdev Controller", 00:28:11.234 "serial_number": "00000000000000000000", 00:28:11.234 "firmware_revision": "24.05.1", 00:28:11.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:11.234 
"oacs": { 00:28:11.234 "security": 0, 00:28:11.234 "format": 0, 00:28:11.234 "firmware": 0, 00:28:11.234 "ns_manage": 0 00:28:11.234 }, 00:28:11.234 "multi_ctrlr": true, 00:28:11.234 "ana_reporting": false 00:28:11.234 }, 00:28:11.234 "vs": { 00:28:11.234 "nvme_version": "1.3" 00:28:11.234 }, 00:28:11.234 "ns_data": { 00:28:11.234 "id": 1, 00:28:11.234 "can_share": true 00:28:11.234 } 00:28:11.234 } 00:28:11.234 ], 00:28:11.234 "mp_policy": "active_passive" 00:28:11.234 } 00:28:11.234 } 00:28:11.234 ] 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.kG6T66Nv2v 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.kG6T66Nv2v 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.234 [2024-07-14 05:42:18.323615] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:11.234 [2024-07-14 05:42:18.323743] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kG6T66Nv2v 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.234 [2024-07-14 05:42:18.331637] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kG6T66Nv2v 00:28:11.234 05:42:18 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.234 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.493 [2024-07-14 05:42:18.339651] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:11.493 [2024-07-14 05:42:18.339709] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:11.493 nvme0n1 00:28:11.493 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.493 05:42:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:11.493 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.493 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.493 [ 00:28:11.493 { 00:28:11.493 "name": "nvme0n1", 00:28:11.493 "aliases": [ 00:28:11.493 "49cc684d-4089-4835-9c86-08f4a6d076ce" 00:28:11.493 ], 00:28:11.493 "product_name": "NVMe disk", 00:28:11.493 "block_size": 512, 00:28:11.493 "num_blocks": 2097152, 00:28:11.493 "uuid": "49cc684d-4089-4835-9c86-08f4a6d076ce", 00:28:11.493 "assigned_rate_limits": { 00:28:11.493 "rw_ios_per_sec": 0, 00:28:11.493 "rw_mbytes_per_sec": 0, 00:28:11.493 "r_mbytes_per_sec": 0, 00:28:11.493 "w_mbytes_per_sec": 0 00:28:11.493 }, 00:28:11.493 "claimed": false, 00:28:11.493 "zoned": false, 00:28:11.493 "supported_io_types": { 00:28:11.493 "read": true, 00:28:11.493 "write": true, 00:28:11.493 "unmap": false, 00:28:11.493 "write_zeroes": true, 00:28:11.493 "flush": true, 00:28:11.493 "reset": true, 00:28:11.493 "compare": true, 00:28:11.493 "compare_and_write": true, 00:28:11.493 "abort": true, 00:28:11.493 "nvme_admin": true, 00:28:11.493 "nvme_io": true 00:28:11.493 }, 00:28:11.493 "memory_domains": [ 00:28:11.493 { 00:28:11.493 "dma_device_id": "system", 00:28:11.493 "dma_device_type": 1 00:28:11.494 } 00:28:11.494 ], 00:28:11.494 "driver_specific": { 00:28:11.494 "nvme": [ 00:28:11.494 { 00:28:11.494 "trid": { 00:28:11.494 "trtype": "TCP", 00:28:11.494 "adrfam": "IPv4", 00:28:11.494 "traddr": "10.0.0.2", 00:28:11.494 "trsvcid": "4421", 00:28:11.494 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:11.494 }, 00:28:11.494 "ctrlr_data": { 00:28:11.494 "cntlid": 3, 00:28:11.494 "vendor_id": "0x8086", 00:28:11.494 "model_number": "SPDK bdev Controller", 00:28:11.494 "serial_number": "00000000000000000000", 00:28:11.494 "firmware_revision": "24.05.1", 00:28:11.494 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:11.494 "oacs": { 00:28:11.494 "security": 0, 00:28:11.494 "format": 0, 00:28:11.494 "firmware": 0, 00:28:11.494 "ns_manage": 0 00:28:11.494 }, 00:28:11.494 "multi_ctrlr": true, 00:28:11.494 "ana_reporting": false 00:28:11.494 }, 00:28:11.494 "vs": { 00:28:11.494 "nvme_version": "1.3" 00:28:11.494 }, 00:28:11.494 "ns_data": { 00:28:11.494 "id": 1, 00:28:11.494 "can_share": true 00:28:11.494 } 00:28:11.494 } 00:28:11.494 ], 00:28:11.494 "mp_policy": "active_passive" 00:28:11.494 } 00:28:11.494 } 00:28:11.494 ] 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- 
# set +x 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.kG6T66Nv2v 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:11.494 rmmod nvme_tcp 00:28:11.494 rmmod nvme_fabrics 00:28:11.494 rmmod nvme_keyring 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3328561 ']' 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3328561 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 3328561 ']' 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 3328561 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3328561 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3328561' 00:28:11.494 killing process with pid 3328561 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 3328561 00:28:11.494 [2024-07-14 05:42:18.534650] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:11.494 [2024-07-14 05:42:18.534692] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:11.494 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 3328561 00:28:11.753 05:42:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:11.753 05:42:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:11.753 05:42:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:11.753 05:42:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:11.753 05:42:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:11.753 05:42:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.753 
05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.753 05:42:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.287 05:42:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:14.287 00:28:14.287 real 0m5.395s 00:28:14.287 user 0m2.046s 00:28:14.287 sys 0m1.734s 00:28:14.287 05:42:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:14.287 05:42:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.287 ************************************ 00:28:14.287 END TEST nvmf_async_init 00:28:14.287 ************************************ 00:28:14.287 05:42:20 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:14.287 05:42:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:14.287 05:42:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:14.287 05:42:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:14.287 ************************************ 00:28:14.287 START TEST dma 00:28:14.287 ************************************ 00:28:14.287 05:42:20 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:14.287 * Looking for test storage... 00:28:14.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:14.287 05:42:20 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.287 05:42:20 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.287 05:42:20 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.287 05:42:20 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.287 05:42:20 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.287 05:42:20 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.287 05:42:20 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.287 05:42:20 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:14.287 05:42:20 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.287 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:14.288 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:14.288 05:42:20 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:14.288 05:42:20 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:14.288 05:42:20 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:14.288 00:28:14.288 real 0m0.061s 00:28:14.288 user 0m0.025s 00:28:14.288 sys 0m0.041s 00:28:14.288 
05:42:20 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:14.288 05:42:20 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:14.288 ************************************ 00:28:14.288 END TEST dma 00:28:14.288 ************************************ 00:28:14.288 05:42:20 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:14.288 05:42:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:14.288 05:42:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:14.288 05:42:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:14.288 ************************************ 00:28:14.288 START TEST nvmf_identify 00:28:14.288 ************************************ 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:14.288 * Looking for test storage... 00:28:14.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.288 05:42:20 
nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.288 05:42:20 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 
00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:14.288 05:42:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.191 05:42:23 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:16.191 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:16.191 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:16.191 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:16.191 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:16.191 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:16.192 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:16.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:28:16.192 00:28:16.192 --- 10.0.0.2 ping statistics --- 00:28:16.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.192 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:16.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:16.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:28:16.192 00:28:16.192 --- 10.0.0.1 ping statistics --- 00:28:16.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.192 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3330680 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3330680 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 3330680 ']' 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:16.192 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:16.192 [2024-07-14 05:42:23.250388] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
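For readers following the identify test below: the target-side setup it exercises can be reproduced by hand with SPDK's scripts/rpc.py, using the same RPCs that appear in the xtrace that follows. This is a minimal sketch under the assumption that it is run from the SPDK repo root against a freshly started nvmf_tgt; the trace itself remains the authoritative record of what the test actually ran.

    # Start the target the same way identify.sh does (the CI run wraps this in the cvl_0_0_ns_spdk network namespace)
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # Configure the TCP transport, a 64 MB malloc bdev, and the test subsystem
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Query the discovery subsystem the same way the test does
    ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all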
00:28:16.192 [2024-07-14 05:42:23.250471] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:16.192 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.450 [2024-07-14 05:42:23.316761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:16.450 [2024-07-14 05:42:23.408778] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:16.450 [2024-07-14 05:42:23.408827] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:16.450 [2024-07-14 05:42:23.408854] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:16.450 [2024-07-14 05:42:23.408871] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:16.450 [2024-07-14 05:42:23.408882] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:16.450 [2024-07-14 05:42:23.408974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.450 [2024-07-14 05:42:23.409003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:16.450 [2024-07-14 05:42:23.409062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:16.450 [2024-07-14 05:42:23.409064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.450 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:16.450 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:28:16.450 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:16.450 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.450 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:16.450 [2024-07-14 05:42:23.549691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:16.737 Malloc0 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:16.737 [2024-07-14 05:42:23.627549] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.737 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:16.737 [ 00:28:16.737 { 00:28:16.737 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:16.737 "subtype": "Discovery", 00:28:16.737 "listen_addresses": [ 00:28:16.737 { 00:28:16.737 "trtype": "TCP", 00:28:16.737 "adrfam": "IPv4", 00:28:16.737 "traddr": "10.0.0.2", 00:28:16.737 "trsvcid": "4420" 00:28:16.737 } 00:28:16.737 ], 00:28:16.738 "allow_any_host": true, 00:28:16.738 "hosts": [] 00:28:16.738 }, 00:28:16.738 { 00:28:16.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:16.738 "subtype": "NVMe", 00:28:16.738 "listen_addresses": [ 00:28:16.738 { 00:28:16.738 "trtype": "TCP", 00:28:16.738 "adrfam": "IPv4", 00:28:16.738 "traddr": "10.0.0.2", 00:28:16.738 "trsvcid": "4420" 00:28:16.738 } 00:28:16.738 ], 00:28:16.738 "allow_any_host": true, 00:28:16.738 "hosts": [], 00:28:16.738 "serial_number": "SPDK00000000000001", 00:28:16.738 "model_number": "SPDK bdev Controller", 00:28:16.738 "max_namespaces": 32, 00:28:16.738 "min_cntlid": 1, 00:28:16.738 "max_cntlid": 65519, 00:28:16.738 "namespaces": [ 00:28:16.738 { 00:28:16.738 "nsid": 1, 00:28:16.738 "bdev_name": "Malloc0", 00:28:16.738 "name": "Malloc0", 00:28:16.738 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:16.738 "eui64": "ABCDEF0123456789", 00:28:16.738 "uuid": "369e74e5-d666-4539-a0f3-67aa468d905e" 00:28:16.738 } 00:28:16.738 ] 00:28:16.738 } 00:28:16.738 ] 00:28:16.738 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.738 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:16.738 [2024-07-14 05:42:23.667833] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:28:16.738 [2024-07-14 05:42:23.667891] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330824 ] 00:28:16.738 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.738 [2024-07-14 05:42:23.705496] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:16.738 [2024-07-14 05:42:23.705557] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:16.738 [2024-07-14 05:42:23.705567] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:16.738 [2024-07-14 05:42:23.705584] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:16.738 [2024-07-14 05:42:23.705598] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:16.738 [2024-07-14 05:42:23.705995] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:16.738 [2024-07-14 05:42:23.706061] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x88b980 0 00:28:16.738 [2024-07-14 05:42:23.711886] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:16.738 [2024-07-14 05:42:23.711907] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:16.738 [2024-07-14 05:42:23.711915] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:16.738 [2024-07-14 05:42:23.711921] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:16.738 [2024-07-14 05:42:23.711975] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.711988] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.711995] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88b980) 00:28:16.738 [2024-07-14 05:42:23.712013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:16.738 [2024-07-14 05:42:23.712040] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f34c0, cid 0, qid 0 00:28:16.738 [2024-07-14 05:42:23.719879] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.738 [2024-07-14 05:42:23.719896] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.738 [2024-07-14 05:42:23.719903] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.719911] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f34c0) on tqpair=0x88b980 00:28:16.738 [2024-07-14 05:42:23.719932] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:16.738 [2024-07-14 05:42:23.719944] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:16.738 [2024-07-14 05:42:23.719954] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:16.738 [2024-07-14 05:42:23.719975] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.719983] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.719990] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88b980) 00:28:16.738 [2024-07-14 05:42:23.720001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.738 [2024-07-14 05:42:23.720025] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f34c0, cid 0, qid 0 00:28:16.738 [2024-07-14 05:42:23.720218] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.738 [2024-07-14 05:42:23.720230] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.738 [2024-07-14 05:42:23.720237] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.720244] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f34c0) on tqpair=0x88b980 00:28:16.738 [2024-07-14 05:42:23.720257] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:16.738 [2024-07-14 05:42:23.720271] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:16.738 [2024-07-14 05:42:23.720283] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.720290] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.720296] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88b980) 00:28:16.738 [2024-07-14 05:42:23.720307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.738 [2024-07-14 05:42:23.720328] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f34c0, cid 0, qid 0 00:28:16.738 [2024-07-14 05:42:23.720518] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.738 [2024-07-14 05:42:23.720533] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.738 [2024-07-14 05:42:23.720540] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.720547] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f34c0) on tqpair=0x88b980 00:28:16.738 [2024-07-14 05:42:23.720556] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:16.738 [2024-07-14 05:42:23.720570] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:16.738 [2024-07-14 05:42:23.720582] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.720589] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.720596] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88b980) 00:28:16.738 [2024-07-14 05:42:23.720606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.738 [2024-07-14 05:42:23.720627] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f34c0, cid 0, qid 0 00:28:16.738 [2024-07-14 05:42:23.720805] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.738 [2024-07-14 05:42:23.720817] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.738 [2024-07-14 05:42:23.720824] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.720831] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f34c0) on tqpair=0x88b980 00:28:16.738 [2024-07-14 05:42:23.720843] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:16.738 [2024-07-14 05:42:23.720861] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.720878] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.720885] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88b980) 00:28:16.738 [2024-07-14 05:42:23.720896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.738 [2024-07-14 05:42:23.720917] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f34c0, cid 0, qid 0 00:28:16.738 [2024-07-14 05:42:23.721074] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.738 [2024-07-14 05:42:23.721089] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.738 [2024-07-14 05:42:23.721096] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.721103] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f34c0) on tqpair=0x88b980 00:28:16.738 [2024-07-14 05:42:23.721111] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:16.738 [2024-07-14 05:42:23.721120] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:16.738 [2024-07-14 05:42:23.721133] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:16.738 [2024-07-14 05:42:23.721242] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:16.738 [2024-07-14 05:42:23.721250] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:16.738 [2024-07-14 05:42:23.721264] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.721271] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.721278] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88b980) 00:28:16.738 [2024-07-14 05:42:23.721288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.738 [2024-07-14 05:42:23.721310] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f34c0, cid 0, qid 0 00:28:16.738 [2024-07-14 05:42:23.721495] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.738 [2024-07-14 05:42:23.721506] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.738 [2024-07-14 05:42:23.721513] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.738 
[2024-07-14 05:42:23.721520] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f34c0) on tqpair=0x88b980 00:28:16.738 [2024-07-14 05:42:23.721528] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:16.738 [2024-07-14 05:42:23.721544] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.721552] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.721559] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88b980) 00:28:16.738 [2024-07-14 05:42:23.721569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.738 [2024-07-14 05:42:23.721589] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f34c0, cid 0, qid 0 00:28:16.738 [2024-07-14 05:42:23.721733] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.738 [2024-07-14 05:42:23.721745] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.738 [2024-07-14 05:42:23.721751] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.738 [2024-07-14 05:42:23.721762] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f34c0) on tqpair=0x88b980 00:28:16.739 [2024-07-14 05:42:23.721771] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:16.739 [2024-07-14 05:42:23.721779] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:16.739 [2024-07-14 05:42:23.721792] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:16.739 [2024-07-14 05:42:23.721810] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:16.739 [2024-07-14 05:42:23.721827] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.721835] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88b980) 00:28:16.739 [2024-07-14 05:42:23.721846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.739 [2024-07-14 05:42:23.721873] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f34c0, cid 0, qid 0 00:28:16.739 [2024-07-14 05:42:23.722092] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.739 [2024-07-14 05:42:23.722107] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.739 [2024-07-14 05:42:23.722114] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.722121] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88b980): datao=0, datal=4096, cccid=0 00:28:16.739 [2024-07-14 05:42:23.722128] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f34c0) on tqpair(0x88b980): expected_datao=0, payload_size=4096 00:28:16.739 [2024-07-14 05:42:23.722136] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.739 
[2024-07-14 05:42:23.722166] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.722176] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.764889] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.739 [2024-07-14 05:42:23.764910] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.739 [2024-07-14 05:42:23.764918] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.764926] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f34c0) on tqpair=0x88b980 00:28:16.739 [2024-07-14 05:42:23.764945] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:16.739 [2024-07-14 05:42:23.764956] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:16.739 [2024-07-14 05:42:23.764965] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:16.739 [2024-07-14 05:42:23.764973] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:16.739 [2024-07-14 05:42:23.764982] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:16.739 [2024-07-14 05:42:23.764990] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:16.739 [2024-07-14 05:42:23.765006] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:16.739 [2024-07-14 05:42:23.765019] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.765026] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.765033] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88b980) 00:28:16.739 [2024-07-14 05:42:23.765045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:16.739 [2024-07-14 05:42:23.765073] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f34c0, cid 0, qid 0 00:28:16.739 [2024-07-14 05:42:23.765266] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.739 [2024-07-14 05:42:23.765282] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.739 [2024-07-14 05:42:23.765289] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.765295] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f34c0) on tqpair=0x88b980 00:28:16.739 [2024-07-14 05:42:23.765309] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.765316] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.765323] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88b980) 00:28:16.739 [2024-07-14 05:42:23.765333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.739 [2024-07-14 05:42:23.765344] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.765350] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.765357] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x88b980) 00:28:16.739 [2024-07-14 05:42:23.765365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.739 [2024-07-14 05:42:23.765398] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.765405] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.765411] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x88b980) 00:28:16.739 [2024-07-14 05:42:23.765419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.739 [2024-07-14 05:42:23.765428] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.765435] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.765441] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88b980) 00:28:16.739 [2024-07-14 05:42:23.765449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.739 [2024-07-14 05:42:23.765458] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:16.739 [2024-07-14 05:42:23.765477] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:16.739 [2024-07-14 05:42:23.765489] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.765496] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88b980) 00:28:16.739 [2024-07-14 05:42:23.765506] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.739 [2024-07-14 05:42:23.765528] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f34c0, cid 0, qid 0 00:28:16.739 [2024-07-14 05:42:23.765555] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f3620, cid 1, qid 0 00:28:16.739 [2024-07-14 05:42:23.765563] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f3780, cid 2, qid 0 00:28:16.739 [2024-07-14 05:42:23.765571] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f38e0, cid 3, qid 0 00:28:16.739 [2024-07-14 05:42:23.765578] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f3a40, cid 4, qid 0 00:28:16.739 [2024-07-14 05:42:23.765783] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.739 [2024-07-14 05:42:23.765795] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.739 [2024-07-14 05:42:23.765805] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.765813] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f3a40) on tqpair=0x88b980 00:28:16.739 [2024-07-14 05:42:23.765823] 
nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:16.739 [2024-07-14 05:42:23.765832] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:16.739 [2024-07-14 05:42:23.765849] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.765858] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88b980) 00:28:16.739 [2024-07-14 05:42:23.765876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.739 [2024-07-14 05:42:23.765899] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f3a40, cid 4, qid 0 00:28:16.739 [2024-07-14 05:42:23.766091] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.739 [2024-07-14 05:42:23.766103] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.739 [2024-07-14 05:42:23.766110] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.766116] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88b980): datao=0, datal=4096, cccid=4 00:28:16.739 [2024-07-14 05:42:23.766124] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f3a40) on tqpair(0x88b980): expected_datao=0, payload_size=4096 00:28:16.739 [2024-07-14 05:42:23.766131] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.766142] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.766149] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.766204] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.739 [2024-07-14 05:42:23.766215] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.739 [2024-07-14 05:42:23.766221] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.766228] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f3a40) on tqpair=0x88b980 00:28:16.739 [2024-07-14 05:42:23.766246] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:16.739 [2024-07-14 05:42:23.766283] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.766293] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88b980) 00:28:16.739 [2024-07-14 05:42:23.766305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.739 [2024-07-14 05:42:23.766316] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.766323] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.766329] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x88b980) 00:28:16.739 [2024-07-14 05:42:23.766338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.739 [2024-07-14 05:42:23.766383] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x8f3a40, cid 4, qid 0 00:28:16.739 [2024-07-14 05:42:23.766395] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f3ba0, cid 5, qid 0 00:28:16.739 [2024-07-14 05:42:23.766670] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.739 [2024-07-14 05:42:23.766685] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.739 [2024-07-14 05:42:23.766692] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.766699] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88b980): datao=0, datal=1024, cccid=4 00:28:16.739 [2024-07-14 05:42:23.766706] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f3a40) on tqpair(0x88b980): expected_datao=0, payload_size=1024 00:28:16.739 [2024-07-14 05:42:23.766718] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.766728] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.766735] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.766759] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.739 [2024-07-14 05:42:23.766767] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.739 [2024-07-14 05:42:23.766774] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.739 [2024-07-14 05:42:23.766780] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f3ba0) on tqpair=0x88b980 00:28:16.740 [2024-07-14 05:42:23.807039] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.740 [2024-07-14 05:42:23.807060] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.740 [2024-07-14 05:42:23.807068] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.740 [2024-07-14 05:42:23.807075] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f3a40) on tqpair=0x88b980 00:28:16.740 [2024-07-14 05:42:23.807093] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.740 [2024-07-14 05:42:23.807103] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88b980) 00:28:16.740 [2024-07-14 05:42:23.807115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.740 [2024-07-14 05:42:23.807145] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f3a40, cid 4, qid 0 00:28:16.740 [2024-07-14 05:42:23.807325] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.740 [2024-07-14 05:42:23.807337] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.740 [2024-07-14 05:42:23.807343] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.740 [2024-07-14 05:42:23.807350] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88b980): datao=0, datal=3072, cccid=4 00:28:16.740 [2024-07-14 05:42:23.807357] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f3a40) on tqpair(0x88b980): expected_datao=0, payload_size=3072 00:28:16.740 [2024-07-14 05:42:23.807365] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.740 [2024-07-14 05:42:23.807375] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.740 [2024-07-14 05:42:23.807382] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.740 [2024-07-14 05:42:23.807437] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.740 [2024-07-14 05:42:23.807448] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.740 [2024-07-14 05:42:23.807454] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.740 [2024-07-14 05:42:23.807461] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f3a40) on tqpair=0x88b980 00:28:16.740 [2024-07-14 05:42:23.807476] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.740 [2024-07-14 05:42:23.807484] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88b980) 00:28:16.740 [2024-07-14 05:42:23.807495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.740 [2024-07-14 05:42:23.807522] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f3a40, cid 4, qid 0 00:28:16.740 [2024-07-14 05:42:23.807691] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.740 [2024-07-14 05:42:23.807707] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.740 [2024-07-14 05:42:23.807713] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.740 [2024-07-14 05:42:23.807720] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88b980): datao=0, datal=8, cccid=4 00:28:16.740 [2024-07-14 05:42:23.807727] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f3a40) on tqpair(0x88b980): expected_datao=0, payload_size=8 00:28:16.740 [2024-07-14 05:42:23.807739] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.740 [2024-07-14 05:42:23.807750] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.740 [2024-07-14 05:42:23.807757] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.003 [2024-07-14 05:42:23.851894] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.003 [2024-07-14 05:42:23.851927] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.003 [2024-07-14 05:42:23.851936] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.003 [2024-07-14 05:42:23.851943] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f3a40) on tqpair=0x88b980 00:28:17.003 ===================================================== 00:28:17.003 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:17.003 ===================================================== 00:28:17.003 Controller Capabilities/Features 00:28:17.003 ================================ 00:28:17.003 Vendor ID: 0000 00:28:17.003 Subsystem Vendor ID: 0000 00:28:17.003 Serial Number: .................... 00:28:17.003 Model Number: ........................................ 
00:28:17.003 Firmware Version: 24.05.1 00:28:17.003 Recommended Arb Burst: 0 00:28:17.003 IEEE OUI Identifier: 00 00 00 00:28:17.003 Multi-path I/O 00:28:17.003 May have multiple subsystem ports: No 00:28:17.003 May have multiple controllers: No 00:28:17.003 Associated with SR-IOV VF: No 00:28:17.003 Max Data Transfer Size: 131072 00:28:17.003 Max Number of Namespaces: 0 00:28:17.003 Max Number of I/O Queues: 1024 00:28:17.003 NVMe Specification Version (VS): 1.3 00:28:17.003 NVMe Specification Version (Identify): 1.3 00:28:17.003 Maximum Queue Entries: 128 00:28:17.003 Contiguous Queues Required: Yes 00:28:17.003 Arbitration Mechanisms Supported 00:28:17.003 Weighted Round Robin: Not Supported 00:28:17.003 Vendor Specific: Not Supported 00:28:17.003 Reset Timeout: 15000 ms 00:28:17.003 Doorbell Stride: 4 bytes 00:28:17.003 NVM Subsystem Reset: Not Supported 00:28:17.003 Command Sets Supported 00:28:17.003 NVM Command Set: Supported 00:28:17.003 Boot Partition: Not Supported 00:28:17.003 Memory Page Size Minimum: 4096 bytes 00:28:17.003 Memory Page Size Maximum: 4096 bytes 00:28:17.003 Persistent Memory Region: Not Supported 00:28:17.003 Optional Asynchronous Events Supported 00:28:17.003 Namespace Attribute Notices: Not Supported 00:28:17.003 Firmware Activation Notices: Not Supported 00:28:17.003 ANA Change Notices: Not Supported 00:28:17.003 PLE Aggregate Log Change Notices: Not Supported 00:28:17.003 LBA Status Info Alert Notices: Not Supported 00:28:17.003 EGE Aggregate Log Change Notices: Not Supported 00:28:17.003 Normal NVM Subsystem Shutdown event: Not Supported 00:28:17.003 Zone Descriptor Change Notices: Not Supported 00:28:17.003 Discovery Log Change Notices: Supported 00:28:17.003 Controller Attributes 00:28:17.003 128-bit Host Identifier: Not Supported 00:28:17.003 Non-Operational Permissive Mode: Not Supported 00:28:17.003 NVM Sets: Not Supported 00:28:17.003 Read Recovery Levels: Not Supported 00:28:17.003 Endurance Groups: Not Supported 00:28:17.003 Predictable Latency Mode: Not Supported 00:28:17.003 Traffic Based Keep ALive: Not Supported 00:28:17.003 Namespace Granularity: Not Supported 00:28:17.003 SQ Associations: Not Supported 00:28:17.003 UUID List: Not Supported 00:28:17.003 Multi-Domain Subsystem: Not Supported 00:28:17.003 Fixed Capacity Management: Not Supported 00:28:17.003 Variable Capacity Management: Not Supported 00:28:17.003 Delete Endurance Group: Not Supported 00:28:17.003 Delete NVM Set: Not Supported 00:28:17.003 Extended LBA Formats Supported: Not Supported 00:28:17.003 Flexible Data Placement Supported: Not Supported 00:28:17.003 00:28:17.003 Controller Memory Buffer Support 00:28:17.003 ================================ 00:28:17.003 Supported: No 00:28:17.003 00:28:17.003 Persistent Memory Region Support 00:28:17.003 ================================ 00:28:17.003 Supported: No 00:28:17.003 00:28:17.003 Admin Command Set Attributes 00:28:17.003 ============================ 00:28:17.003 Security Send/Receive: Not Supported 00:28:17.003 Format NVM: Not Supported 00:28:17.003 Firmware Activate/Download: Not Supported 00:28:17.003 Namespace Management: Not Supported 00:28:17.003 Device Self-Test: Not Supported 00:28:17.003 Directives: Not Supported 00:28:17.003 NVMe-MI: Not Supported 00:28:17.003 Virtualization Management: Not Supported 00:28:17.003 Doorbell Buffer Config: Not Supported 00:28:17.003 Get LBA Status Capability: Not Supported 00:28:17.003 Command & Feature Lockdown Capability: Not Supported 00:28:17.003 Abort Command Limit: 1 00:28:17.003 
Async Event Request Limit: 4 00:28:17.003 Number of Firmware Slots: N/A 00:28:17.003 Firmware Slot 1 Read-Only: N/A 00:28:17.003 Firmware Activation Without Reset: N/A 00:28:17.003 Multiple Update Detection Support: N/A 00:28:17.004 Firmware Update Granularity: No Information Provided 00:28:17.004 Per-Namespace SMART Log: No 00:28:17.004 Asymmetric Namespace Access Log Page: Not Supported 00:28:17.004 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:17.004 Command Effects Log Page: Not Supported 00:28:17.004 Get Log Page Extended Data: Supported 00:28:17.004 Telemetry Log Pages: Not Supported 00:28:17.004 Persistent Event Log Pages: Not Supported 00:28:17.004 Supported Log Pages Log Page: May Support 00:28:17.004 Commands Supported & Effects Log Page: Not Supported 00:28:17.004 Feature Identifiers & Effects Log Page:May Support 00:28:17.004 NVMe-MI Commands & Effects Log Page: May Support 00:28:17.004 Data Area 4 for Telemetry Log: Not Supported 00:28:17.004 Error Log Page Entries Supported: 128 00:28:17.004 Keep Alive: Not Supported 00:28:17.004 00:28:17.004 NVM Command Set Attributes 00:28:17.004 ========================== 00:28:17.004 Submission Queue Entry Size 00:28:17.004 Max: 1 00:28:17.004 Min: 1 00:28:17.004 Completion Queue Entry Size 00:28:17.004 Max: 1 00:28:17.004 Min: 1 00:28:17.004 Number of Namespaces: 0 00:28:17.004 Compare Command: Not Supported 00:28:17.004 Write Uncorrectable Command: Not Supported 00:28:17.004 Dataset Management Command: Not Supported 00:28:17.004 Write Zeroes Command: Not Supported 00:28:17.004 Set Features Save Field: Not Supported 00:28:17.004 Reservations: Not Supported 00:28:17.004 Timestamp: Not Supported 00:28:17.004 Copy: Not Supported 00:28:17.004 Volatile Write Cache: Not Present 00:28:17.004 Atomic Write Unit (Normal): 1 00:28:17.004 Atomic Write Unit (PFail): 1 00:28:17.004 Atomic Compare & Write Unit: 1 00:28:17.004 Fused Compare & Write: Supported 00:28:17.004 Scatter-Gather List 00:28:17.004 SGL Command Set: Supported 00:28:17.004 SGL Keyed: Supported 00:28:17.004 SGL Bit Bucket Descriptor: Not Supported 00:28:17.004 SGL Metadata Pointer: Not Supported 00:28:17.004 Oversized SGL: Not Supported 00:28:17.004 SGL Metadata Address: Not Supported 00:28:17.004 SGL Offset: Supported 00:28:17.004 Transport SGL Data Block: Not Supported 00:28:17.004 Replay Protected Memory Block: Not Supported 00:28:17.004 00:28:17.004 Firmware Slot Information 00:28:17.004 ========================= 00:28:17.004 Active slot: 0 00:28:17.004 00:28:17.004 00:28:17.004 Error Log 00:28:17.004 ========= 00:28:17.004 00:28:17.004 Active Namespaces 00:28:17.004 ================= 00:28:17.004 Discovery Log Page 00:28:17.004 ================== 00:28:17.004 Generation Counter: 2 00:28:17.004 Number of Records: 2 00:28:17.004 Record Format: 0 00:28:17.004 00:28:17.004 Discovery Log Entry 0 00:28:17.004 ---------------------- 00:28:17.004 Transport Type: 3 (TCP) 00:28:17.004 Address Family: 1 (IPv4) 00:28:17.004 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:17.004 Entry Flags: 00:28:17.004 Duplicate Returned Information: 1 00:28:17.004 Explicit Persistent Connection Support for Discovery: 1 00:28:17.004 Transport Requirements: 00:28:17.004 Secure Channel: Not Required 00:28:17.004 Port ID: 0 (0x0000) 00:28:17.004 Controller ID: 65535 (0xffff) 00:28:17.004 Admin Max SQ Size: 128 00:28:17.004 Transport Service Identifier: 4420 00:28:17.004 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:17.004 Transport Address: 10.0.0.2 00:28:17.004 
Discovery Log Entry 1 00:28:17.004 ---------------------- 00:28:17.004 Transport Type: 3 (TCP) 00:28:17.004 Address Family: 1 (IPv4) 00:28:17.004 Subsystem Type: 2 (NVM Subsystem) 00:28:17.004 Entry Flags: 00:28:17.004 Duplicate Returned Information: 0 00:28:17.004 Explicit Persistent Connection Support for Discovery: 0 00:28:17.004 Transport Requirements: 00:28:17.004 Secure Channel: Not Required 00:28:17.004 Port ID: 0 (0x0000) 00:28:17.004 Controller ID: 65535 (0xffff) 00:28:17.004 Admin Max SQ Size: 128 00:28:17.004 Transport Service Identifier: 4420 00:28:17.004 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:17.004 Transport Address: 10.0.0.2 [2024-07-14 05:42:23.852068] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:17.004 [2024-07-14 05:42:23.852093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.004 [2024-07-14 05:42:23.852106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.004 [2024-07-14 05:42:23.852115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.004 [2024-07-14 05:42:23.852125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.004 [2024-07-14 05:42:23.852143] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.004 [2024-07-14 05:42:23.852152] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.004 [2024-07-14 05:42:23.852159] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88b980) 00:28:17.004 [2024-07-14 05:42:23.852170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-07-14 05:42:23.852201] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f38e0, cid 3, qid 0 00:28:17.004 [2024-07-14 05:42:23.852381] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.004 [2024-07-14 05:42:23.852397] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.004 [2024-07-14 05:42:23.852404] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.004 [2024-07-14 05:42:23.852411] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f38e0) on tqpair=0x88b980 00:28:17.004 [2024-07-14 05:42:23.852423] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.004 [2024-07-14 05:42:23.852430] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.004 [2024-07-14 05:42:23.852437] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88b980) 00:28:17.004 [2024-07-14 05:42:23.852448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-07-14 05:42:23.852475] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f38e0, cid 3, qid 0 00:28:17.004 [2024-07-14 05:42:23.852640] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.004 [2024-07-14 05:42:23.852656] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.004 [2024-07-14 05:42:23.852663] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.004 [2024-07-14 05:42:23.852670] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f38e0) on tqpair=0x88b980 00:28:17.004 [2024-07-14 05:42:23.852679] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:17.004 [2024-07-14 05:42:23.852687] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:17.004 [2024-07-14 05:42:23.852704] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.004 [2024-07-14 05:42:23.852712] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.004 [2024-07-14 05:42:23.852719] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88b980) 00:28:17.004 [2024-07-14 05:42:23.852734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-07-14 05:42:23.852756] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f38e0, cid 3, qid 0 00:28:17.004 [2024-07-14 05:42:23.852945] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.004 [2024-07-14 05:42:23.852969] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.004 [2024-07-14 05:42:23.852985] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.004 [2024-07-14 05:42:23.852996] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f38e0) on tqpair=0x88b980 00:28:17.004 [2024-07-14 05:42:23.853016] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.004 [2024-07-14 05:42:23.853025] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.004 [2024-07-14 05:42:23.853032] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88b980) 00:28:17.004 [2024-07-14 05:42:23.853043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-07-14 05:42:23.853065] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f38e0, cid 3, qid 0 00:28:17.004 [2024-07-14 05:42:23.853242] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.004 [2024-07-14 05:42:23.853254] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.004 [2024-07-14 05:42:23.853261] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.004 [2024-07-14 05:42:23.853268] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f38e0) on tqpair=0x88b980 00:28:17.004 [2024-07-14 05:42:23.853284] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.004 [2024-07-14 05:42:23.853293] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.004 [2024-07-14 05:42:23.853299] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88b980) 00:28:17.004 [2024-07-14 05:42:23.853309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-07-14 05:42:23.853330] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f38e0, cid 3, qid 0 00:28:17.004 [2024-07-14 05:42:23.853474] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.004 [2024-07-14 
05:42:23.853490] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.004 [2024-07-14 05:42:23.853497] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.004 [2024-07-14 05:42:23.853503] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f38e0) on tqpair=0x88b980 00:28:17.004 [2024-07-14 05:42:23.853520] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.004 [2024-07-14 05:42:23.853529] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.004 [2024-07-14 05:42:23.853535] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88b980) 00:28:17.004 [2024-07-14 05:42:23.853546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-07-14 05:42:23.853567] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f38e0, cid 3, qid 0 00:28:17.004 [2024-07-14 05:42:23.853712] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.004 [2024-07-14 05:42:23.853727] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.004 [2024-07-14 05:42:23.853734] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.853741] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f38e0) on tqpair=0x88b980 00:28:17.005 [2024-07-14 05:42:23.853757] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.853772] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.853788] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88b980) 00:28:17.005 [2024-07-14 05:42:23.853809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.005 [2024-07-14 05:42:23.853841] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f38e0, cid 3, qid 0 00:28:17.005 [2024-07-14 05:42:23.854027] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.005 [2024-07-14 05:42:23.854043] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.005 [2024-07-14 05:42:23.854049] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.854056] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f38e0) on tqpair=0x88b980 00:28:17.005 [2024-07-14 05:42:23.854073] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.854082] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.854089] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88b980) 00:28:17.005 [2024-07-14 05:42:23.854099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.005 [2024-07-14 05:42:23.854120] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f38e0, cid 3, qid 0 00:28:17.005 [2024-07-14 05:42:23.854280] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.005 [2024-07-14 05:42:23.854296] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.005 [2024-07-14 05:42:23.854303] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.005 
[2024-07-14 05:42:23.854310] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f38e0) on tqpair=0x88b980 00:28:17.005 [2024-07-14 05:42:23.854326] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.854335] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.854342] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88b980) 00:28:17.005 [2024-07-14 05:42:23.854352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.005 [2024-07-14 05:42:23.854373] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f38e0, cid 3, qid 0 00:28:17.005 [2024-07-14 05:42:23.854536] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.005 [2024-07-14 05:42:23.854548] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.005 [2024-07-14 05:42:23.854555] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.854561] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f38e0) on tqpair=0x88b980 00:28:17.005 [2024-07-14 05:42:23.854577] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.854586] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.854592] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88b980) 00:28:17.005 [2024-07-14 05:42:23.854603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.005 [2024-07-14 05:42:23.854623] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f38e0, cid 3, qid 0 00:28:17.005 [2024-07-14 05:42:23.854785] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.005 [2024-07-14 05:42:23.854801] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.005 [2024-07-14 05:42:23.854807] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.854814] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f38e0) on tqpair=0x88b980 00:28:17.005 [2024-07-14 05:42:23.854830] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.854839] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.854846] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88b980) 00:28:17.005 [2024-07-14 05:42:23.854856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.005 [2024-07-14 05:42:23.854889] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f38e0, cid 3, qid 0 00:28:17.005 [2024-07-14 05:42:23.855072] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.005 [2024-07-14 05:42:23.855088] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.005 [2024-07-14 05:42:23.855095] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.855102] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f38e0) on tqpair=0x88b980 00:28:17.005 [2024-07-14 05:42:23.855118] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.855127] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.855134] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88b980) 00:28:17.005 [2024-07-14 05:42:23.855144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.005 [2024-07-14 05:42:23.855166] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f38e0, cid 3, qid 0 00:28:17.005 [2024-07-14 05:42:23.855311] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.005 [2024-07-14 05:42:23.855326] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.005 [2024-07-14 05:42:23.855333] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.855339] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f38e0) on tqpair=0x88b980 00:28:17.005 [2024-07-14 05:42:23.855356] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.855364] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.855371] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88b980) 00:28:17.005 [2024-07-14 05:42:23.855381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.005 [2024-07-14 05:42:23.855406] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f38e0, cid 3, qid 0 00:28:17.005 [2024-07-14 05:42:23.855546] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.005 [2024-07-14 05:42:23.855562] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.005 [2024-07-14 05:42:23.855569] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.855575] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f38e0) on tqpair=0x88b980 00:28:17.005 [2024-07-14 05:42:23.855592] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.855601] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.855607] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88b980) 00:28:17.005 [2024-07-14 05:42:23.855618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.005 [2024-07-14 05:42:23.855639] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f38e0, cid 3, qid 0 00:28:17.005 [2024-07-14 05:42:23.855781] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.005 [2024-07-14 05:42:23.855802] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.005 [2024-07-14 05:42:23.855818] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.855831] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f38e0) on tqpair=0x88b980 00:28:17.005 [2024-07-14 05:42:23.855853] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.855862] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.859887] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88b980) 00:28:17.005 [2024-07-14 05:42:23.859900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.005 [2024-07-14 05:42:23.859924] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f38e0, cid 3, qid 0 00:28:17.005 [2024-07-14 05:42:23.860107] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.005 [2024-07-14 05:42:23.860124] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.005 [2024-07-14 05:42:23.860131] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.005 [2024-07-14 05:42:23.860138] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8f38e0) on tqpair=0x88b980 00:28:17.005 [2024-07-14 05:42:23.860151] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:28:17.005 00:28:17.005 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:17.005 [2024-07-14 05:42:23.894076] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:17.005 [2024-07-14 05:42:23.894122] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330828 ] 00:28:17.005 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.005 [2024-07-14 05:42:23.929933] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:17.005 [2024-07-14 05:42:23.929982] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:17.005 [2024-07-14 05:42:23.929991] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:17.005 [2024-07-14 05:42:23.930009] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:17.005 [2024-07-14 05:42:23.930021] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:17.005 [2024-07-14 05:42:23.930276] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:17.005 [2024-07-14 05:42:23.930317] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb69980 0 00:28:17.005 [2024-07-14 05:42:23.936880] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:17.005 [2024-07-14 05:42:23.936899] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:17.006 [2024-07-14 05:42:23.936907] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:17.006 [2024-07-14 05:42:23.936913] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:17.006 [2024-07-14 05:42:23.936966] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.936978] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.936985] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69980) 00:28:17.006 
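The identify.sh step above invokes spdk_nvme_identify with a TCP transport string pointing at the nqn.2016-06.io.spdk:cnode1 subsystem that the discovery log page reported. The C sketch below is a minimal, hedged stand-in for what such a tool does with that string, using SPDK's public host API as I understand it (spdk_nvme_transport_id_parse, spdk_nvme_connect, spdk_nvme_ctrlr_get_data, spdk_nvme_detach); the program name, error handling, and build setup are illustrative assumptions and not part of the test harness. spdk_nvme_connect() is what drives the admin-queue bring-up traced in the surrounding DEBUG lines (FABRIC CONNECT, VS/CAP property reads, CC.EN = 1, wait for CSTS.RDY = 1, IDENTIFY CONTROLLER).

#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";          /* illustrative name, not from the log */
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "spdk_env_init() failed\n");
		return 1;
	}

	/* Same transport string the harness passes to spdk_nvme_identify via -r. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "could not parse transport ID\n");
		return 1;
	}

	/* spdk_nvme_connect() performs the admin-queue bring-up traced in the
	 * DEBUG lines around this point: FABRIC CONNECT, VS/CAP property reads,
	 * CC.EN = 1, wait for CSTS.RDY = 1, then IDENTIFY CONTROLLER. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to %s failed\n", trid.traddr);
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model Number:     %.40s\n", (const char *)cdata->mn);
	printf("Serial Number:    %.20s\n", (const char *)cdata->sn);
	printf("Firmware Version: %.8s\n", (const char *)cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}

If built against the SPDK headers and libraries from this checkout, the printed fields would correspond to the Model Number / Serial Number / Firmware Version lines of the identify dump format shown earlier; the exact compile and link flags are omitted here.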
[2024-07-14 05:42:23.936999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:17.006 [2024-07-14 05:42:23.937025] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd14c0, cid 0, qid 0 00:28:17.006 [2024-07-14 05:42:23.943878] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.006 [2024-07-14 05:42:23.943897] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.006 [2024-07-14 05:42:23.943905] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.943912] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd14c0) on tqpair=0xb69980 00:28:17.006 [2024-07-14 05:42:23.943931] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:17.006 [2024-07-14 05:42:23.943942] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:17.006 [2024-07-14 05:42:23.943952] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:17.006 [2024-07-14 05:42:23.943974] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.943983] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.943990] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69980) 00:28:17.006 [2024-07-14 05:42:23.944001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.006 [2024-07-14 05:42:23.944025] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd14c0, cid 0, qid 0 00:28:17.006 [2024-07-14 05:42:23.944185] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.006 [2024-07-14 05:42:23.944200] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.006 [2024-07-14 05:42:23.944207] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.944214] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd14c0) on tqpair=0xb69980 00:28:17.006 [2024-07-14 05:42:23.944226] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:17.006 [2024-07-14 05:42:23.944240] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:17.006 [2024-07-14 05:42:23.944253] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.944260] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.944266] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69980) 00:28:17.006 [2024-07-14 05:42:23.944277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.006 [2024-07-14 05:42:23.944299] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd14c0, cid 0, qid 0 00:28:17.006 [2024-07-14 05:42:23.944459] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.006 [2024-07-14 05:42:23.944471] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.006 [2024-07-14 05:42:23.944478] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.944485] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd14c0) on tqpair=0xb69980 00:28:17.006 [2024-07-14 05:42:23.944493] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:17.006 [2024-07-14 05:42:23.944507] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:17.006 [2024-07-14 05:42:23.944519] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.944526] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.944533] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69980) 00:28:17.006 [2024-07-14 05:42:23.944543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.006 [2024-07-14 05:42:23.944564] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd14c0, cid 0, qid 0 00:28:17.006 [2024-07-14 05:42:23.944743] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.006 [2024-07-14 05:42:23.944756] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.006 [2024-07-14 05:42:23.944763] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.944770] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd14c0) on tqpair=0xb69980 00:28:17.006 [2024-07-14 05:42:23.944778] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:17.006 [2024-07-14 05:42:23.944795] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.944803] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.944810] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69980) 00:28:17.006 [2024-07-14 05:42:23.944824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.006 [2024-07-14 05:42:23.944846] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd14c0, cid 0, qid 0 00:28:17.006 [2024-07-14 05:42:23.945010] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.006 [2024-07-14 05:42:23.945024] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.006 [2024-07-14 05:42:23.945030] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.945037] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd14c0) on tqpair=0xb69980 00:28:17.006 [2024-07-14 05:42:23.945045] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:17.006 [2024-07-14 05:42:23.945053] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:17.006 [2024-07-14 05:42:23.945066] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:17.006 [2024-07-14 05:42:23.945178] 
nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:17.006 [2024-07-14 05:42:23.945186] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:17.006 [2024-07-14 05:42:23.945198] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.945205] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.945211] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69980) 00:28:17.006 [2024-07-14 05:42:23.945222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.006 [2024-07-14 05:42:23.945242] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd14c0, cid 0, qid 0 00:28:17.006 [2024-07-14 05:42:23.945416] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.006 [2024-07-14 05:42:23.945432] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.006 [2024-07-14 05:42:23.945439] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.945445] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd14c0) on tqpair=0xb69980 00:28:17.006 [2024-07-14 05:42:23.945454] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:17.006 [2024-07-14 05:42:23.945470] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.945479] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.945486] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69980) 00:28:17.006 [2024-07-14 05:42:23.945497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.006 [2024-07-14 05:42:23.945517] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd14c0, cid 0, qid 0 00:28:17.006 [2024-07-14 05:42:23.945681] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.006 [2024-07-14 05:42:23.945696] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.006 [2024-07-14 05:42:23.945703] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.945709] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd14c0) on tqpair=0xb69980 00:28:17.006 [2024-07-14 05:42:23.945717] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:17.006 [2024-07-14 05:42:23.945725] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:17.006 [2024-07-14 05:42:23.945739] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:17.006 [2024-07-14 05:42:23.945755] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:17.006 [2024-07-14 05:42:23.945771] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:28:17.006 [2024-07-14 05:42:23.945780] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69980) 00:28:17.006 [2024-07-14 05:42:23.945806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.006 [2024-07-14 05:42:23.945827] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd14c0, cid 0, qid 0 00:28:17.006 [2024-07-14 05:42:23.946057] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.006 [2024-07-14 05:42:23.946073] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.006 [2024-07-14 05:42:23.946080] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.946086] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb69980): datao=0, datal=4096, cccid=0 00:28:17.006 [2024-07-14 05:42:23.946094] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd14c0) on tqpair(0xb69980): expected_datao=0, payload_size=4096 00:28:17.006 [2024-07-14 05:42:23.946101] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.006 [2024-07-14 05:42:23.946112] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.946119] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.946170] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.007 [2024-07-14 05:42:23.946181] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.007 [2024-07-14 05:42:23.946188] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.946194] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd14c0) on tqpair=0xb69980 00:28:17.007 [2024-07-14 05:42:23.946209] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:17.007 [2024-07-14 05:42:23.946219] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:17.007 [2024-07-14 05:42:23.946226] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:17.007 [2024-07-14 05:42:23.946233] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:17.007 [2024-07-14 05:42:23.946240] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:17.007 [2024-07-14 05:42:23.946248] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:17.007 [2024-07-14 05:42:23.946262] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:17.007 [2024-07-14 05:42:23.946274] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.946282] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.946288] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69980) 00:28:17.007 [2024-07-14 05:42:23.946299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:17.007 
[2024-07-14 05:42:23.946335] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd14c0, cid 0, qid 0 00:28:17.007 [2024-07-14 05:42:23.946508] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.007 [2024-07-14 05:42:23.946520] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.007 [2024-07-14 05:42:23.946527] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.946533] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd14c0) on tqpair=0xb69980 00:28:17.007 [2024-07-14 05:42:23.946548] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.946556] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.946562] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb69980) 00:28:17.007 [2024-07-14 05:42:23.946572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.007 [2024-07-14 05:42:23.946582] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.946589] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.946595] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb69980) 00:28:17.007 [2024-07-14 05:42:23.946604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.007 [2024-07-14 05:42:23.946614] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.946620] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.946626] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb69980) 00:28:17.007 [2024-07-14 05:42:23.946635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.007 [2024-07-14 05:42:23.946645] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.946651] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.946657] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69980) 00:28:17.007 [2024-07-14 05:42:23.946666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.007 [2024-07-14 05:42:23.946674] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:17.007 [2024-07-14 05:42:23.946708] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:17.007 [2024-07-14 05:42:23.946721] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.946727] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb69980) 00:28:17.007 [2024-07-14 05:42:23.946737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.007 [2024-07-14 05:42:23.946758] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd14c0, cid 0, qid 0 00:28:17.007 [2024-07-14 05:42:23.946785] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd1620, cid 1, qid 0 00:28:17.007 [2024-07-14 05:42:23.946793] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd1780, cid 2, qid 0 00:28:17.007 [2024-07-14 05:42:23.946801] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd18e0, cid 3, qid 0 00:28:17.007 [2024-07-14 05:42:23.946808] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd1a40, cid 4, qid 0 00:28:17.007 [2024-07-14 05:42:23.947021] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.007 [2024-07-14 05:42:23.947037] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.007 [2024-07-14 05:42:23.947043] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.947050] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd1a40) on tqpair=0xb69980 00:28:17.007 [2024-07-14 05:42:23.947058] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:17.007 [2024-07-14 05:42:23.947067] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:17.007 [2024-07-14 05:42:23.947081] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:17.007 [2024-07-14 05:42:23.947095] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:17.007 [2024-07-14 05:42:23.947106] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.947113] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.947120] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb69980) 00:28:17.007 [2024-07-14 05:42:23.947130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:17.007 [2024-07-14 05:42:23.947151] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd1a40, cid 4, qid 0 00:28:17.007 [2024-07-14 05:42:23.947308] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.007 [2024-07-14 05:42:23.947323] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.007 [2024-07-14 05:42:23.947330] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.947336] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd1a40) on tqpair=0xb69980 00:28:17.007 [2024-07-14 05:42:23.947404] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:17.007 [2024-07-14 05:42:23.947424] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:17.007 [2024-07-14 05:42:23.947438] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.947461] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb69980) 00:28:17.007 
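At this point in the cnode1 session the driver has configured the keep-alive timer (5000000 us) and the number of queues and is moving on to identify the active namespaces. Below is a hedged sketch of how a host application could walk the resulting active-namespace list with SPDK's public accessors (spdk_nvme_ctrlr_get_first_active_ns / spdk_nvme_ctrlr_get_next_active_ns / spdk_nvme_ctrlr_get_ns and the spdk_nvme_ns_get_* getters, as I understand their signatures). The connect boilerplate repeats the transport string from the log; everything else is an illustrative assumption rather than harness code.

#include <inttypes.h>
#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

/* Walk the active namespace list that the IDENTIFY exchanges above populate
 * and print basic geometry for each namespace. */
static void list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		if (ns == NULL) {
			continue;
		}
		printf("Namespace %" PRIu32 ": %" PRIu32 "-byte sectors, %" PRIu64 " bytes total\n",
		       nsid, spdk_nvme_ns_get_sector_size(ns), spdk_nvme_ns_get_size(ns));
	}
}

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "ns_list_sketch";           /* illustrative name, not from the log */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	list_active_namespaces(ctrlr);
	spdk_nvme_detach(ctrlr);
	return 0;
}

The trace that follows reports "Namespace 1 was added" for this controller, so a loop like this would be expected to visit at least nsid 1 on this target.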
[2024-07-14 05:42:23.947471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.007 [2024-07-14 05:42:23.947492] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd1a40, cid 4, qid 0 00:28:17.007 [2024-07-14 05:42:23.947680] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.007 [2024-07-14 05:42:23.947696] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.007 [2024-07-14 05:42:23.947702] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.947709] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb69980): datao=0, datal=4096, cccid=4 00:28:17.007 [2024-07-14 05:42:23.947716] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd1a40) on tqpair(0xb69980): expected_datao=0, payload_size=4096 00:28:17.007 [2024-07-14 05:42:23.947724] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.947758] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.947767] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.951882] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.007 [2024-07-14 05:42:23.951898] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.007 [2024-07-14 05:42:23.951904] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.951911] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd1a40) on tqpair=0xb69980 00:28:17.007 [2024-07-14 05:42:23.951925] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:17.007 [2024-07-14 05:42:23.951943] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:17.007 [2024-07-14 05:42:23.951974] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:17.007 [2024-07-14 05:42:23.951988] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.951996] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb69980) 00:28:17.007 [2024-07-14 05:42:23.952007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.007 [2024-07-14 05:42:23.952034] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd1a40, cid 4, qid 0 00:28:17.007 [2024-07-14 05:42:23.952217] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.007 [2024-07-14 05:42:23.952232] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.007 [2024-07-14 05:42:23.952238] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.952245] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb69980): datao=0, datal=4096, cccid=4 00:28:17.007 [2024-07-14 05:42:23.952252] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd1a40) on tqpair(0xb69980): expected_datao=0, payload_size=4096 00:28:17.007 [2024-07-14 05:42:23.952259] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.952299] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.952308] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.952418] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.007 [2024-07-14 05:42:23.952429] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.007 [2024-07-14 05:42:23.952436] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.007 [2024-07-14 05:42:23.952442] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd1a40) on tqpair=0xb69980 00:28:17.007 [2024-07-14 05:42:23.952463] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:17.007 [2024-07-14 05:42:23.952481] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:17.007 [2024-07-14 05:42:23.952495] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.952502] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb69980) 00:28:17.008 [2024-07-14 05:42:23.952513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.008 [2024-07-14 05:42:23.952534] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd1a40, cid 4, qid 0 00:28:17.008 [2024-07-14 05:42:23.952704] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.008 [2024-07-14 05:42:23.952717] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.008 [2024-07-14 05:42:23.952723] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.952730] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb69980): datao=0, datal=4096, cccid=4 00:28:17.008 [2024-07-14 05:42:23.952737] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd1a40) on tqpair(0xb69980): expected_datao=0, payload_size=4096 00:28:17.008 [2024-07-14 05:42:23.952744] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.952754] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.952761] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.952821] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.008 [2024-07-14 05:42:23.952833] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.008 [2024-07-14 05:42:23.952839] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.952846] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd1a40) on tqpair=0xb69980 00:28:17.008 [2024-07-14 05:42:23.952858] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:17.008 [2024-07-14 05:42:23.952881] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:17.008 [2024-07-14 05:42:23.952896] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:17.008 [2024-07-14 05:42:23.952911] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:17.008 [2024-07-14 05:42:23.952920] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:17.008 [2024-07-14 05:42:23.952928] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:17.008 [2024-07-14 05:42:23.952936] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:17.008 [2024-07-14 05:42:23.952944] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:17.008 [2024-07-14 05:42:23.952967] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.952977] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb69980) 00:28:17.008 [2024-07-14 05:42:23.952988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.008 [2024-07-14 05:42:23.952999] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.953006] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.953012] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb69980) 00:28:17.008 [2024-07-14 05:42:23.953021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.008 [2024-07-14 05:42:23.953046] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd1a40, cid 4, qid 0 00:28:17.008 [2024-07-14 05:42:23.953057] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd1ba0, cid 5, qid 0 00:28:17.008 [2024-07-14 05:42:23.953254] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.008 [2024-07-14 05:42:23.953269] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.008 [2024-07-14 05:42:23.953276] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.953282] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd1a40) on tqpair=0xb69980 00:28:17.008 [2024-07-14 05:42:23.953293] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.008 [2024-07-14 05:42:23.953303] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.008 [2024-07-14 05:42:23.953309] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.953315] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd1ba0) on tqpair=0xb69980 00:28:17.008 [2024-07-14 05:42:23.953331] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.953340] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb69980) 00:28:17.008 [2024-07-14 05:42:23.953367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:17.008 [2024-07-14 05:42:23.953388] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd1ba0, cid 5, qid 0 00:28:17.008 [2024-07-14 05:42:23.953588] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.008 [2024-07-14 05:42:23.953604] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.008 [2024-07-14 05:42:23.953610] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.953617] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd1ba0) on tqpair=0xb69980 00:28:17.008 [2024-07-14 05:42:23.953633] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.953642] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb69980) 00:28:17.008 [2024-07-14 05:42:23.953652] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.008 [2024-07-14 05:42:23.953677] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd1ba0, cid 5, qid 0 00:28:17.008 [2024-07-14 05:42:23.953837] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.008 [2024-07-14 05:42:23.953852] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.008 [2024-07-14 05:42:23.953859] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.953875] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd1ba0) on tqpair=0xb69980 00:28:17.008 [2024-07-14 05:42:23.953894] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.953903] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb69980) 00:28:17.008 [2024-07-14 05:42:23.953913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.008 [2024-07-14 05:42:23.953934] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd1ba0, cid 5, qid 0 00:28:17.008 [2024-07-14 05:42:23.954090] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.008 [2024-07-14 05:42:23.954105] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.008 [2024-07-14 05:42:23.954111] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.954118] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd1ba0) on tqpair=0xb69980 00:28:17.008 [2024-07-14 05:42:23.954137] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.954147] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb69980) 00:28:17.008 [2024-07-14 05:42:23.954158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.008 [2024-07-14 05:42:23.954169] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.954176] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb69980) 00:28:17.008 [2024-07-14 05:42:23.954186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.008 [2024-07-14 05:42:23.954197] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.954204] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb69980) 00:28:17.008 [2024-07-14 05:42:23.954213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.008 [2024-07-14 05:42:23.954224] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.954247] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb69980) 00:28:17.008 [2024-07-14 05:42:23.954257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.008 [2024-07-14 05:42:23.954278] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd1ba0, cid 5, qid 0 00:28:17.008 [2024-07-14 05:42:23.954289] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd1a40, cid 4, qid 0 00:28:17.008 [2024-07-14 05:42:23.954312] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd1d00, cid 6, qid 0 00:28:17.008 [2024-07-14 05:42:23.954319] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd1e60, cid 7, qid 0 00:28:17.008 [2024-07-14 05:42:23.954639] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.008 [2024-07-14 05:42:23.954654] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.008 [2024-07-14 05:42:23.954661] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.954667] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb69980): datao=0, datal=8192, cccid=5 00:28:17.008 [2024-07-14 05:42:23.954679] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd1ba0) on tqpair(0xb69980): expected_datao=0, payload_size=8192 00:28:17.008 [2024-07-14 05:42:23.954687] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.954697] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.954704] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.008 [2024-07-14 05:42:23.954712] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.008 [2024-07-14 05:42:23.954721] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.009 [2024-07-14 05:42:23.954727] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.009 [2024-07-14 05:42:23.954733] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb69980): datao=0, datal=512, cccid=4 00:28:17.009 [2024-07-14 05:42:23.954740] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd1a40) on tqpair(0xb69980): expected_datao=0, payload_size=512 00:28:17.009 [2024-07-14 05:42:23.954748] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.009 [2024-07-14 05:42:23.954756] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.009 [2024-07-14 05:42:23.954763] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.009 [2024-07-14 05:42:23.954771] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.009 [2024-07-14 
05:42:23.954780] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.009 [2024-07-14 05:42:23.954786] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.009 [2024-07-14 05:42:23.954792] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb69980): datao=0, datal=512, cccid=6 00:28:17.009 [2024-07-14 05:42:23.954800] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd1d00) on tqpair(0xb69980): expected_datao=0, payload_size=512 00:28:17.009 [2024-07-14 05:42:23.954807] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.009 [2024-07-14 05:42:23.954816] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.009 [2024-07-14 05:42:23.954822] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.009 [2024-07-14 05:42:23.954831] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.009 [2024-07-14 05:42:23.954839] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.009 [2024-07-14 05:42:23.954845] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.009 [2024-07-14 05:42:23.954852] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb69980): datao=0, datal=4096, cccid=7 00:28:17.009 [2024-07-14 05:42:23.954859] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd1e60) on tqpair(0xb69980): expected_datao=0, payload_size=4096 00:28:17.009 [2024-07-14 05:42:23.954876] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.009 [2024-07-14 05:42:23.954887] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.009 [2024-07-14 05:42:23.954894] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.009 [2024-07-14 05:42:23.954906] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.009 [2024-07-14 05:42:23.954915] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.009 [2024-07-14 05:42:23.954922] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.009 [2024-07-14 05:42:23.954928] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd1ba0) on tqpair=0xb69980 00:28:17.009 [2024-07-14 05:42:23.954948] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.009 [2024-07-14 05:42:23.954959] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.009 [2024-07-14 05:42:23.954966] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.009 [2024-07-14 05:42:23.954972] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd1a40) on tqpair=0xb69980 00:28:17.009 [2024-07-14 05:42:23.954985] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.009 [2024-07-14 05:42:23.954996] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.009 [2024-07-14 05:42:23.955002] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.009 [2024-07-14 05:42:23.955011] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd1d00) on tqpair=0xb69980 00:28:17.009 [2024-07-14 05:42:23.955025] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.009 [2024-07-14 05:42:23.955036] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.009 [2024-07-14 05:42:23.955057] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.009 [2024-07-14 05:42:23.955064] 
nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd1e60) on tqpair=0xb69980 00:28:17.009 ===================================================== 00:28:17.009 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:17.009 ===================================================== 00:28:17.009 Controller Capabilities/Features 00:28:17.009 ================================ 00:28:17.009 Vendor ID: 8086 00:28:17.009 Subsystem Vendor ID: 8086 00:28:17.009 Serial Number: SPDK00000000000001 00:28:17.009 Model Number: SPDK bdev Controller 00:28:17.009 Firmware Version: 24.05.1 00:28:17.009 Recommended Arb Burst: 6 00:28:17.009 IEEE OUI Identifier: e4 d2 5c 00:28:17.009 Multi-path I/O 00:28:17.009 May have multiple subsystem ports: Yes 00:28:17.009 May have multiple controllers: Yes 00:28:17.009 Associated with SR-IOV VF: No 00:28:17.009 Max Data Transfer Size: 131072 00:28:17.009 Max Number of Namespaces: 32 00:28:17.009 Max Number of I/O Queues: 127 00:28:17.009 NVMe Specification Version (VS): 1.3 00:28:17.009 NVMe Specification Version (Identify): 1.3 00:28:17.009 Maximum Queue Entries: 128 00:28:17.009 Contiguous Queues Required: Yes 00:28:17.009 Arbitration Mechanisms Supported 00:28:17.009 Weighted Round Robin: Not Supported 00:28:17.009 Vendor Specific: Not Supported 00:28:17.009 Reset Timeout: 15000 ms 00:28:17.009 Doorbell Stride: 4 bytes 00:28:17.009 NVM Subsystem Reset: Not Supported 00:28:17.009 Command Sets Supported 00:28:17.009 NVM Command Set: Supported 00:28:17.009 Boot Partition: Not Supported 00:28:17.009 Memory Page Size Minimum: 4096 bytes 00:28:17.009 Memory Page Size Maximum: 4096 bytes 00:28:17.009 Persistent Memory Region: Not Supported 00:28:17.009 Optional Asynchronous Events Supported 00:28:17.009 Namespace Attribute Notices: Supported 00:28:17.009 Firmware Activation Notices: Not Supported 00:28:17.009 ANA Change Notices: Not Supported 00:28:17.009 PLE Aggregate Log Change Notices: Not Supported 00:28:17.009 LBA Status Info Alert Notices: Not Supported 00:28:17.009 EGE Aggregate Log Change Notices: Not Supported 00:28:17.009 Normal NVM Subsystem Shutdown event: Not Supported 00:28:17.009 Zone Descriptor Change Notices: Not Supported 00:28:17.009 Discovery Log Change Notices: Not Supported 00:28:17.009 Controller Attributes 00:28:17.009 128-bit Host Identifier: Supported 00:28:17.009 Non-Operational Permissive Mode: Not Supported 00:28:17.009 NVM Sets: Not Supported 00:28:17.009 Read Recovery Levels: Not Supported 00:28:17.009 Endurance Groups: Not Supported 00:28:17.009 Predictable Latency Mode: Not Supported 00:28:17.009 Traffic Based Keep ALive: Not Supported 00:28:17.009 Namespace Granularity: Not Supported 00:28:17.009 SQ Associations: Not Supported 00:28:17.009 UUID List: Not Supported 00:28:17.009 Multi-Domain Subsystem: Not Supported 00:28:17.009 Fixed Capacity Management: Not Supported 00:28:17.009 Variable Capacity Management: Not Supported 00:28:17.009 Delete Endurance Group: Not Supported 00:28:17.009 Delete NVM Set: Not Supported 00:28:17.009 Extended LBA Formats Supported: Not Supported 00:28:17.009 Flexible Data Placement Supported: Not Supported 00:28:17.009 00:28:17.009 Controller Memory Buffer Support 00:28:17.009 ================================ 00:28:17.009 Supported: No 00:28:17.009 00:28:17.009 Persistent Memory Region Support 00:28:17.009 ================================ 00:28:17.009 Supported: No 00:28:17.009 00:28:17.009 Admin Command Set Attributes 00:28:17.009 ============================ 00:28:17.009 
Security Send/Receive: Not Supported 00:28:17.009 Format NVM: Not Supported 00:28:17.009 Firmware Activate/Download: Not Supported 00:28:17.009 Namespace Management: Not Supported 00:28:17.009 Device Self-Test: Not Supported 00:28:17.009 Directives: Not Supported 00:28:17.009 NVMe-MI: Not Supported 00:28:17.009 Virtualization Management: Not Supported 00:28:17.009 Doorbell Buffer Config: Not Supported 00:28:17.009 Get LBA Status Capability: Not Supported 00:28:17.009 Command & Feature Lockdown Capability: Not Supported 00:28:17.009 Abort Command Limit: 4 00:28:17.009 Async Event Request Limit: 4 00:28:17.009 Number of Firmware Slots: N/A 00:28:17.009 Firmware Slot 1 Read-Only: N/A 00:28:17.009 Firmware Activation Without Reset: N/A 00:28:17.009 Multiple Update Detection Support: N/A 00:28:17.009 Firmware Update Granularity: No Information Provided 00:28:17.009 Per-Namespace SMART Log: No 00:28:17.009 Asymmetric Namespace Access Log Page: Not Supported 00:28:17.009 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:17.009 Command Effects Log Page: Supported 00:28:17.009 Get Log Page Extended Data: Supported 00:28:17.009 Telemetry Log Pages: Not Supported 00:28:17.009 Persistent Event Log Pages: Not Supported 00:28:17.009 Supported Log Pages Log Page: May Support 00:28:17.009 Commands Supported & Effects Log Page: Not Supported 00:28:17.009 Feature Identifiers & Effects Log Page:May Support 00:28:17.009 NVMe-MI Commands & Effects Log Page: May Support 00:28:17.009 Data Area 4 for Telemetry Log: Not Supported 00:28:17.009 Error Log Page Entries Supported: 128 00:28:17.009 Keep Alive: Supported 00:28:17.009 Keep Alive Granularity: 10000 ms 00:28:17.009 00:28:17.009 NVM Command Set Attributes 00:28:17.009 ========================== 00:28:17.009 Submission Queue Entry Size 00:28:17.009 Max: 64 00:28:17.009 Min: 64 00:28:17.009 Completion Queue Entry Size 00:28:17.009 Max: 16 00:28:17.009 Min: 16 00:28:17.009 Number of Namespaces: 32 00:28:17.009 Compare Command: Supported 00:28:17.009 Write Uncorrectable Command: Not Supported 00:28:17.009 Dataset Management Command: Supported 00:28:17.009 Write Zeroes Command: Supported 00:28:17.009 Set Features Save Field: Not Supported 00:28:17.009 Reservations: Supported 00:28:17.009 Timestamp: Not Supported 00:28:17.009 Copy: Supported 00:28:17.009 Volatile Write Cache: Present 00:28:17.009 Atomic Write Unit (Normal): 1 00:28:17.009 Atomic Write Unit (PFail): 1 00:28:17.009 Atomic Compare & Write Unit: 1 00:28:17.009 Fused Compare & Write: Supported 00:28:17.009 Scatter-Gather List 00:28:17.009 SGL Command Set: Supported 00:28:17.009 SGL Keyed: Supported 00:28:17.009 SGL Bit Bucket Descriptor: Not Supported 00:28:17.009 SGL Metadata Pointer: Not Supported 00:28:17.009 Oversized SGL: Not Supported 00:28:17.009 SGL Metadata Address: Not Supported 00:28:17.009 SGL Offset: Supported 00:28:17.009 Transport SGL Data Block: Not Supported 00:28:17.010 Replay Protected Memory Block: Not Supported 00:28:17.010 00:28:17.010 Firmware Slot Information 00:28:17.010 ========================= 00:28:17.010 Active slot: 1 00:28:17.010 Slot 1 Firmware Revision: 24.05.1 00:28:17.010 00:28:17.010 00:28:17.010 Commands Supported and Effects 00:28:17.010 ============================== 00:28:17.010 Admin Commands 00:28:17.010 -------------- 00:28:17.010 Get Log Page (02h): Supported 00:28:17.010 Identify (06h): Supported 00:28:17.010 Abort (08h): Supported 00:28:17.010 Set Features (09h): Supported 00:28:17.010 Get Features (0Ah): Supported 00:28:17.010 Asynchronous Event Request 
(0Ch): Supported 00:28:17.010 Keep Alive (18h): Supported 00:28:17.010 I/O Commands 00:28:17.010 ------------ 00:28:17.010 Flush (00h): Supported LBA-Change 00:28:17.010 Write (01h): Supported LBA-Change 00:28:17.010 Read (02h): Supported 00:28:17.010 Compare (05h): Supported 00:28:17.010 Write Zeroes (08h): Supported LBA-Change 00:28:17.010 Dataset Management (09h): Supported LBA-Change 00:28:17.010 Copy (19h): Supported LBA-Change 00:28:17.010 Unknown (79h): Supported LBA-Change 00:28:17.010 Unknown (7Ah): Supported 00:28:17.010 00:28:17.010 Error Log 00:28:17.010 ========= 00:28:17.010 00:28:17.010 Arbitration 00:28:17.010 =========== 00:28:17.010 Arbitration Burst: 1 00:28:17.010 00:28:17.010 Power Management 00:28:17.010 ================ 00:28:17.010 Number of Power States: 1 00:28:17.010 Current Power State: Power State #0 00:28:17.010 Power State #0: 00:28:17.010 Max Power: 0.00 W 00:28:17.010 Non-Operational State: Operational 00:28:17.010 Entry Latency: Not Reported 00:28:17.010 Exit Latency: Not Reported 00:28:17.010 Relative Read Throughput: 0 00:28:17.010 Relative Read Latency: 0 00:28:17.010 Relative Write Throughput: 0 00:28:17.010 Relative Write Latency: 0 00:28:17.010 Idle Power: Not Reported 00:28:17.010 Active Power: Not Reported 00:28:17.010 Non-Operational Permissive Mode: Not Supported 00:28:17.010 00:28:17.010 Health Information 00:28:17.010 ================== 00:28:17.010 Critical Warnings: 00:28:17.010 Available Spare Space: OK 00:28:17.010 Temperature: OK 00:28:17.010 Device Reliability: OK 00:28:17.010 Read Only: No 00:28:17.010 Volatile Memory Backup: OK 00:28:17.010 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:17.010 Temperature Threshold: [2024-07-14 05:42:23.955217] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.955229] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb69980) 00:28:17.010 [2024-07-14 05:42:23.955240] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.010 [2024-07-14 05:42:23.955262] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd1e60, cid 7, qid 0 00:28:17.010 [2024-07-14 05:42:23.955447] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.010 [2024-07-14 05:42:23.955463] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.010 [2024-07-14 05:42:23.955469] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.955476] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd1e60) on tqpair=0xb69980 00:28:17.010 [2024-07-14 05:42:23.955518] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:17.010 [2024-07-14 05:42:23.955539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.010 [2024-07-14 05:42:23.955551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.010 [2024-07-14 05:42:23.955560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.010 [2024-07-14 05:42:23.955584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:17.010 [2024-07-14 05:42:23.955597] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.955604] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.955611] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69980) 00:28:17.010 [2024-07-14 05:42:23.955621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.010 [2024-07-14 05:42:23.955642] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd18e0, cid 3, qid 0 00:28:17.010 [2024-07-14 05:42:23.955856] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.010 [2024-07-14 05:42:23.959880] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.010 [2024-07-14 05:42:23.959891] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.959898] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd18e0) on tqpair=0xb69980 00:28:17.010 [2024-07-14 05:42:23.959910] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.959917] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.959924] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69980) 00:28:17.010 [2024-07-14 05:42:23.959934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.010 [2024-07-14 05:42:23.959977] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd18e0, cid 3, qid 0 00:28:17.010 [2024-07-14 05:42:23.960186] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.010 [2024-07-14 05:42:23.960198] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.010 [2024-07-14 05:42:23.960205] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.960216] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd18e0) on tqpair=0xb69980 00:28:17.010 [2024-07-14 05:42:23.960225] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:17.010 [2024-07-14 05:42:23.960233] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:17.010 [2024-07-14 05:42:23.960248] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.960257] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.960263] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69980) 00:28:17.010 [2024-07-14 05:42:23.960288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.010 [2024-07-14 05:42:23.960309] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd18e0, cid 3, qid 0 00:28:17.010 [2024-07-14 05:42:23.960505] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.010 [2024-07-14 05:42:23.960517] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.010 [2024-07-14 05:42:23.960523] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.960530] 
nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd18e0) on tqpair=0xb69980 00:28:17.010 [2024-07-14 05:42:23.960546] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.960555] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.960561] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69980) 00:28:17.010 [2024-07-14 05:42:23.960572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.010 [2024-07-14 05:42:23.960592] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd18e0, cid 3, qid 0 00:28:17.010 [2024-07-14 05:42:23.960745] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.010 [2024-07-14 05:42:23.960760] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.010 [2024-07-14 05:42:23.960767] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.960773] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd18e0) on tqpair=0xb69980 00:28:17.010 [2024-07-14 05:42:23.960790] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.960798] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.960805] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69980) 00:28:17.010 [2024-07-14 05:42:23.960815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.010 [2024-07-14 05:42:23.960836] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd18e0, cid 3, qid 0 00:28:17.010 [2024-07-14 05:42:23.961044] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.010 [2024-07-14 05:42:23.961059] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.010 [2024-07-14 05:42:23.961066] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.961072] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd18e0) on tqpair=0xb69980 00:28:17.010 [2024-07-14 05:42:23.961089] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.961098] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.961105] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69980) 00:28:17.010 [2024-07-14 05:42:23.961115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.010 [2024-07-14 05:42:23.961136] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd18e0, cid 3, qid 0 00:28:17.010 [2024-07-14 05:42:23.961332] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.010 [2024-07-14 05:42:23.961348] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.010 [2024-07-14 05:42:23.961355] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.961362] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd18e0) on tqpair=0xb69980 00:28:17.010 [2024-07-14 05:42:23.961378] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:28:17.010 [2024-07-14 05:42:23.961387] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.961393] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69980) 00:28:17.010 [2024-07-14 05:42:23.961404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.010 [2024-07-14 05:42:23.961424] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd18e0, cid 3, qid 0 00:28:17.010 [2024-07-14 05:42:23.961636] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.010 [2024-07-14 05:42:23.961652] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.010 [2024-07-14 05:42:23.961658] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.961665] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd18e0) on tqpair=0xb69980 00:28:17.010 [2024-07-14 05:42:23.961681] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.961691] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.010 [2024-07-14 05:42:23.961697] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69980) 00:28:17.010 [2024-07-14 05:42:23.961708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.011 [2024-07-14 05:42:23.961728] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd18e0, cid 3, qid 0 00:28:17.011 [2024-07-14 05:42:23.961894] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.011 [2024-07-14 05:42:23.961909] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.011 [2024-07-14 05:42:23.961916] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.961923] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd18e0) on tqpair=0xb69980 00:28:17.011 [2024-07-14 05:42:23.961939] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.961949] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.961955] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69980) 00:28:17.011 [2024-07-14 05:42:23.961965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.011 [2024-07-14 05:42:23.961986] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd18e0, cid 3, qid 0 00:28:17.011 [2024-07-14 05:42:23.962186] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.011 [2024-07-14 05:42:23.962201] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.011 [2024-07-14 05:42:23.962208] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.962214] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd18e0) on tqpair=0xb69980 00:28:17.011 [2024-07-14 05:42:23.962231] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.962240] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.962247] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0xb69980) 00:28:17.011 [2024-07-14 05:42:23.962257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.011 [2024-07-14 05:42:23.962292] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd18e0, cid 3, qid 0 00:28:17.011 [2024-07-14 05:42:23.962486] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.011 [2024-07-14 05:42:23.962498] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.011 [2024-07-14 05:42:23.962509] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.962516] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd18e0) on tqpair=0xb69980 00:28:17.011 [2024-07-14 05:42:23.962532] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.962541] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.962548] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69980) 00:28:17.011 [2024-07-14 05:42:23.962558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.011 [2024-07-14 05:42:23.962579] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd18e0, cid 3, qid 0 00:28:17.011 [2024-07-14 05:42:23.962776] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.011 [2024-07-14 05:42:23.962791] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.011 [2024-07-14 05:42:23.962797] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.962804] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd18e0) on tqpair=0xb69980 00:28:17.011 [2024-07-14 05:42:23.962820] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.962830] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.962836] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69980) 00:28:17.011 [2024-07-14 05:42:23.962846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.011 [2024-07-14 05:42:23.962891] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd18e0, cid 3, qid 0 00:28:17.011 [2024-07-14 05:42:23.963065] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.011 [2024-07-14 05:42:23.963080] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.011 [2024-07-14 05:42:23.963087] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.963094] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd18e0) on tqpair=0xb69980 00:28:17.011 [2024-07-14 05:42:23.963110] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.963120] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.963126] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69980) 00:28:17.011 [2024-07-14 05:42:23.963137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:17.011 [2024-07-14 05:42:23.963157] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd18e0, cid 3, qid 0 00:28:17.011 [2024-07-14 05:42:23.963353] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.011 [2024-07-14 05:42:23.963364] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.011 [2024-07-14 05:42:23.963371] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.963377] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd18e0) on tqpair=0xb69980 00:28:17.011 [2024-07-14 05:42:23.963393] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.963402] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.963408] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69980) 00:28:17.011 [2024-07-14 05:42:23.963419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.011 [2024-07-14 05:42:23.963454] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd18e0, cid 3, qid 0 00:28:17.011 [2024-07-14 05:42:23.963656] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.011 [2024-07-14 05:42:23.963671] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.011 [2024-07-14 05:42:23.963678] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.963688] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd18e0) on tqpair=0xb69980 00:28:17.011 [2024-07-14 05:42:23.963705] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.963715] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.963721] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69980) 00:28:17.011 [2024-07-14 05:42:23.963732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.011 [2024-07-14 05:42:23.963752] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd18e0, cid 3, qid 0 00:28:17.011 [2024-07-14 05:42:23.967882] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.011 [2024-07-14 05:42:23.967899] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.011 [2024-07-14 05:42:23.967905] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.967912] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd18e0) on tqpair=0xb69980 00:28:17.011 [2024-07-14 05:42:23.967929] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.967938] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.967945] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb69980) 00:28:17.011 [2024-07-14 05:42:23.967956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.011 [2024-07-14 05:42:23.967977] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd18e0, cid 3, qid 0 00:28:17.011 [2024-07-14 05:42:23.968140] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.011 [2024-07-14 05:42:23.968154] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.011 [2024-07-14 05:42:23.968161] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.011 [2024-07-14 05:42:23.968168] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbd18e0) on tqpair=0xb69980 00:28:17.011 [2024-07-14 05:42:23.968181] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:28:17.011 0 Kelvin (-273 Celsius) 00:28:17.011 Available Spare: 0% 00:28:17.011 Available Spare Threshold: 0% 00:28:17.011 Life Percentage Used: 0% 00:28:17.011 Data Units Read: 0 00:28:17.011 Data Units Written: 0 00:28:17.011 Host Read Commands: 0 00:28:17.011 Host Write Commands: 0 00:28:17.011 Controller Busy Time: 0 minutes 00:28:17.011 Power Cycles: 0 00:28:17.011 Power On Hours: 0 hours 00:28:17.011 Unsafe Shutdowns: 0 00:28:17.011 Unrecoverable Media Errors: 0 00:28:17.011 Lifetime Error Log Entries: 0 00:28:17.011 Warning Temperature Time: 0 minutes 00:28:17.011 Critical Temperature Time: 0 minutes 00:28:17.011 00:28:17.011 Number of Queues 00:28:17.011 ================ 00:28:17.011 Number of I/O Submission Queues: 127 00:28:17.011 Number of I/O Completion Queues: 127 00:28:17.011 00:28:17.011 Active Namespaces 00:28:17.011 ================= 00:28:17.011 Namespace ID:1 00:28:17.011 Error Recovery Timeout: Unlimited 00:28:17.011 Command Set Identifier: NVM (00h) 00:28:17.011 Deallocate: Supported 00:28:17.011 Deallocated/Unwritten Error: Not Supported 00:28:17.011 Deallocated Read Value: Unknown 00:28:17.011 Deallocate in Write Zeroes: Not Supported 00:28:17.011 Deallocated Guard Field: 0xFFFF 00:28:17.011 Flush: Supported 00:28:17.011 Reservation: Supported 00:28:17.011 Namespace Sharing Capabilities: Multiple Controllers 00:28:17.011 Size (in LBAs): 131072 (0GiB) 00:28:17.011 Capacity (in LBAs): 131072 (0GiB) 00:28:17.011 Utilization (in LBAs): 131072 (0GiB) 00:28:17.011 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:17.011 EUI64: ABCDEF0123456789 00:28:17.011 UUID: 369e74e5-d666-4539-a0f3-67aa468d905e 00:28:17.011 Thin Provisioning: Not Supported 00:28:17.011 Per-NS Atomic Units: Yes 00:28:17.011 Atomic Boundary Size (Normal): 0 00:28:17.011 Atomic Boundary Size (PFail): 0 00:28:17.011 Atomic Boundary Offset: 0 00:28:17.011 Maximum Single Source Range Length: 65535 00:28:17.011 Maximum Copy Length: 65535 00:28:17.011 Maximum Source Range Count: 1 00:28:17.011 NGUID/EUI64 Never Reused: No 00:28:17.011 Namespace Write Protected: No 00:28:17.011 Number of LBA Formats: 1 00:28:17.011 Current LBA Format: LBA Format #00 00:28:17.011 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:17.011 00:28:17.011 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:17.011 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:17.011 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.011 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:17.011 05:42:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.011 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:17.011 05:42:23 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:17.012 05:42:23 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:28:17.012 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:17.012 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:17.012 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:17.012 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:17.012 05:42:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:17.012 rmmod nvme_tcp 00:28:17.012 rmmod nvme_fabrics 00:28:17.012 rmmod nvme_keyring 00:28:17.012 05:42:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:17.012 05:42:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:17.012 05:42:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:17.012 05:42:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3330680 ']' 00:28:17.012 05:42:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3330680 00:28:17.012 05:42:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 3330680 ']' 00:28:17.012 05:42:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 3330680 00:28:17.012 05:42:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:28:17.012 05:42:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:17.012 05:42:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3330680 00:28:17.012 05:42:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:17.012 05:42:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:17.012 05:42:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3330680' 00:28:17.012 killing process with pid 3330680 00:28:17.012 05:42:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 3330680 00:28:17.012 05:42:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 3330680 00:28:17.270 05:42:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:17.270 05:42:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:17.270 05:42:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:17.270 05:42:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:17.270 05:42:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:17.270 05:42:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.270 05:42:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:17.270 05:42:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.799 05:42:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:19.799 00:28:19.799 real 0m5.439s 00:28:19.799 user 0m4.356s 00:28:19.799 sys 0m1.895s 00:28:19.799 05:42:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:19.799 05:42:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.799 ************************************ 00:28:19.799 END TEST nvmf_identify 00:28:19.799 ************************************ 00:28:19.799 05:42:26 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh 
--transport=tcp 00:28:19.799 05:42:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:19.799 05:42:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:19.799 05:42:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:19.799 ************************************ 00:28:19.799 START TEST nvmf_perf 00:28:19.799 ************************************ 00:28:19.799 05:42:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:19.799 * Looking for test storage... 00:28:19.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:19.799 05:42:26 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.799 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:19.799 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.799 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:19.800 05:42:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:21.700 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:21.700 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:21.700 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:21.700 
Found net devices under 0000:0a:00.1: cvl_0_1 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:21.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:28:21.700 00:28:21.700 --- 10.0.0.2 ping statistics --- 00:28:21.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.700 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:21.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:28:21.700 00:28:21.700 --- 10.0.0.1 ping statistics --- 00:28:21.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.700 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3332749 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3332749 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 3332749 ']' 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:21.700 05:42:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:21.700 [2024-07-14 05:42:28.625111] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:21.700 [2024-07-14 05:42:28.625203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.700 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.700 [2024-07-14 05:42:28.692257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:21.700 [2024-07-14 05:42:28.784770] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.700 [2024-07-14 05:42:28.784836] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
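At this point nvmf_tcp_init has split the two E810 ports into a back-to-back topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and nvmf_tgt is launched inside that namespace so its listeners bind there. Condensed from the commands in the trace above (the interface names are the renamed E810 ports discovered earlier in this run):

    # Condensed restatement of the nvmf_tcp_init steps above, not new configuration.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns

The target is then configured over rpc.py in the order the trace shows next: nvmf_create_transport -t tcp -o, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_ns for Malloc0 and Nvme0n1, and nvmf_subsystem_add_listener on 10.0.0.2:4420, before the perf runs start.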
00:28:21.701 [2024-07-14 05:42:28.784863] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.701 [2024-07-14 05:42:28.784886] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.701 [2024-07-14 05:42:28.784899] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.701 [2024-07-14 05:42:28.784961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.701 [2024-07-14 05:42:28.785017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:21.701 [2024-07-14 05:42:28.785136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.701 [2024-07-14 05:42:28.785139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.958 05:42:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:21.958 05:42:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:28:21.958 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:21.958 05:42:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:21.958 05:42:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:21.958 05:42:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.958 05:42:28 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:21.958 05:42:28 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:25.234 05:42:32 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:25.234 05:42:32 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:25.234 05:42:32 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:25.234 05:42:32 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:25.492 05:42:32 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:25.492 05:42:32 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:25.492 05:42:32 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:25.492 05:42:32 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:25.492 05:42:32 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:25.749 [2024-07-14 05:42:32.801321] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.749 05:42:32 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:26.007 05:42:33 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:26.007 05:42:33 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:26.264 05:42:33 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:26.264 05:42:33 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:26.521 05:42:33 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:26.777 [2024-07-14 05:42:33.784910] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.777 05:42:33 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:27.034 05:42:34 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:27.034 05:42:34 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:27.034 05:42:34 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:27.034 05:42:34 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:28.406 Initializing NVMe Controllers 00:28:28.406 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:28.406 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:28.406 Initialization complete. Launching workers. 00:28:28.406 ======================================================== 00:28:28.406 Latency(us) 00:28:28.406 Device Information : IOPS MiB/s Average min max 00:28:28.406 PCIE (0000:88:00.0) NSID 1 from core 0: 85288.06 333.16 374.84 10.62 7270.91 00:28:28.406 ======================================================== 00:28:28.406 Total : 85288.06 333.16 374.84 10.62 7270.91 00:28:28.406 00:28:28.406 05:42:35 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:28.406 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.337 Initializing NVMe Controllers 00:28:29.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:29.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:29.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:29.337 Initialization complete. Launching workers. 
00:28:29.337 ======================================================== 00:28:29.337 Latency(us) 00:28:29.337 Device Information : IOPS MiB/s Average min max 00:28:29.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 95.66 0.37 10453.50 209.22 45195.61 00:28:29.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.78 0.24 16314.10 7948.95 47888.33 00:28:29.337 ======================================================== 00:28:29.337 Total : 157.44 0.61 12753.23 209.22 47888.33 00:28:29.337 00:28:29.337 05:42:36 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.594 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.967 Initializing NVMe Controllers 00:28:30.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:30.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:30.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:30.967 Initialization complete. Launching workers. 00:28:30.967 ======================================================== 00:28:30.967 Latency(us) 00:28:30.967 Device Information : IOPS MiB/s Average min max 00:28:30.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8335.99 32.56 3856.75 586.28 9646.02 00:28:30.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3844.00 15.02 8362.02 5160.67 15948.65 00:28:30.967 ======================================================== 00:28:30.967 Total : 12179.99 47.58 5278.61 586.28 15948.65 00:28:30.967 00:28:30.967 05:42:37 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:30.968 05:42:37 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:30.968 05:42:37 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:30.968 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.498 Initializing NVMe Controllers 00:28:33.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:33.498 Controller IO queue size 128, less than required. 00:28:33.498 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:33.498 Controller IO queue size 128, less than required. 00:28:33.498 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:33.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:33.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:33.499 Initialization complete. Launching workers. 
00:28:33.499 ======================================================== 00:28:33.499 Latency(us) 00:28:33.499 Device Information : IOPS MiB/s Average min max 00:28:33.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 713.49 178.37 189154.44 103964.83 247246.52 00:28:33.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 501.49 125.37 267929.10 87820.76 437877.27 00:28:33.499 ======================================================== 00:28:33.499 Total : 1214.98 303.75 221669.25 87820.76 437877.27 00:28:33.499 00:28:33.499 05:42:40 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:33.499 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.757 No valid NVMe controllers or AIO or URING devices found 00:28:33.757 Initializing NVMe Controllers 00:28:33.757 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:33.757 Controller IO queue size 128, less than required. 00:28:33.757 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:33.757 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:33.757 Controller IO queue size 128, less than required. 00:28:33.757 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:33.757 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:33.757 WARNING: Some requested NVMe devices were skipped 00:28:33.757 05:42:40 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:33.757 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.356 Initializing NVMe Controllers 00:28:36.356 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:36.356 Controller IO queue size 128, less than required. 00:28:36.356 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:36.356 Controller IO queue size 128, less than required. 00:28:36.356 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:36.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:36.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:36.356 Initialization complete. Launching workers. 
00:28:36.356 00:28:36.356 ==================== 00:28:36.356 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:36.356 TCP transport: 00:28:36.356 polls: 39760 00:28:36.356 idle_polls: 14445 00:28:36.356 sock_completions: 25315 00:28:36.356 nvme_completions: 3069 00:28:36.356 submitted_requests: 4612 00:28:36.356 queued_requests: 1 00:28:36.356 00:28:36.356 ==================== 00:28:36.356 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:36.356 TCP transport: 00:28:36.356 polls: 38528 00:28:36.356 idle_polls: 13771 00:28:36.356 sock_completions: 24757 00:28:36.356 nvme_completions: 3633 00:28:36.356 submitted_requests: 5480 00:28:36.356 queued_requests: 1 00:28:36.356 ======================================================== 00:28:36.356 Latency(us) 00:28:36.356 Device Information : IOPS MiB/s Average min max 00:28:36.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 766.41 191.60 175286.95 94591.08 261149.06 00:28:36.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 907.30 226.83 143412.37 79821.66 201221.50 00:28:36.356 ======================================================== 00:28:36.356 Total : 1673.71 418.43 158008.07 79821.66 261149.06 00:28:36.356 00:28:36.356 05:42:43 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:36.356 05:42:43 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:36.612 05:42:43 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:36.612 05:42:43 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:36.612 05:42:43 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:39.888 05:42:46 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=6b5126c7-e5da-4615-aad2-8eed49199814 00:28:39.888 05:42:46 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 6b5126c7-e5da-4615-aad2-8eed49199814 00:28:39.888 05:42:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=6b5126c7-e5da-4615-aad2-8eed49199814 00:28:39.888 05:42:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:39.888 05:42:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:39.888 05:42:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:39.888 05:42:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:40.145 05:42:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:40.145 { 00:28:40.145 "uuid": "6b5126c7-e5da-4615-aad2-8eed49199814", 00:28:40.145 "name": "lvs_0", 00:28:40.145 "base_bdev": "Nvme0n1", 00:28:40.145 "total_data_clusters": 238234, 00:28:40.145 "free_clusters": 238234, 00:28:40.145 "block_size": 512, 00:28:40.145 "cluster_size": 4194304 00:28:40.145 } 00:28:40.145 ]' 00:28:40.145 05:42:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="6b5126c7-e5da-4615-aad2-8eed49199814") .free_clusters' 00:28:40.145 05:42:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:28:40.145 05:42:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="6b5126c7-e5da-4615-aad2-8eed49199814") .cluster_size' 00:28:40.145 05:42:47 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:40.145 05:42:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:28:40.145 05:42:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:28:40.145 952936 00:28:40.145 05:42:47 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:40.145 05:42:47 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:40.145 05:42:47 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6b5126c7-e5da-4615-aad2-8eed49199814 lbd_0 20480 00:28:40.711 05:42:47 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=0164357f-be61-42f2-a3b9-03a31aa24b6b 00:28:40.711 05:42:47 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 0164357f-be61-42f2-a3b9-03a31aa24b6b lvs_n_0 00:28:41.644 05:42:48 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=aebe2062-ebca-4927-82b9-eb65befaaefb 00:28:41.644 05:42:48 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb aebe2062-ebca-4927-82b9-eb65befaaefb 00:28:41.644 05:42:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=aebe2062-ebca-4927-82b9-eb65befaaefb 00:28:41.644 05:42:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:41.644 05:42:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:41.644 05:42:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:41.644 05:42:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:41.902 05:42:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:41.902 { 00:28:41.902 "uuid": "6b5126c7-e5da-4615-aad2-8eed49199814", 00:28:41.902 "name": "lvs_0", 00:28:41.902 "base_bdev": "Nvme0n1", 00:28:41.902 "total_data_clusters": 238234, 00:28:41.902 "free_clusters": 233114, 00:28:41.902 "block_size": 512, 00:28:41.902 "cluster_size": 4194304 00:28:41.902 }, 00:28:41.902 { 00:28:41.902 "uuid": "aebe2062-ebca-4927-82b9-eb65befaaefb", 00:28:41.902 "name": "lvs_n_0", 00:28:41.902 "base_bdev": "0164357f-be61-42f2-a3b9-03a31aa24b6b", 00:28:41.902 "total_data_clusters": 5114, 00:28:41.902 "free_clusters": 5114, 00:28:41.902 "block_size": 512, 00:28:41.902 "cluster_size": 4194304 00:28:41.902 } 00:28:41.902 ]' 00:28:41.902 05:42:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="aebe2062-ebca-4927-82b9-eb65befaaefb") .free_clusters' 00:28:41.902 05:42:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:28:41.902 05:42:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="aebe2062-ebca-4927-82b9-eb65befaaefb") .cluster_size' 00:28:41.902 05:42:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:41.902 05:42:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:28:41.902 05:42:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:28:41.902 20456 00:28:41.902 05:42:48 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:41.902 05:42:48 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aebe2062-ebca-4927-82b9-eb65befaaefb lbd_nest_0 20456 00:28:42.159 05:42:49 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=eccee166-d02c-4840-830e-01eb92ec6ca8 00:28:42.159 05:42:49 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:42.417 05:42:49 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:42.417 05:42:49 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 eccee166-d02c-4840-830e-01eb92ec6ca8 00:28:42.674 05:42:49 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:42.931 05:42:49 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:42.932 05:42:49 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:42.932 05:42:49 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:42.932 05:42:49 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:42.932 05:42:49 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:42.932 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.113 Initializing NVMe Controllers 00:28:55.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:55.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:55.114 Initialization complete. Launching workers. 00:28:55.114 ======================================================== 00:28:55.114 Latency(us) 00:28:55.114 Device Information : IOPS MiB/s Average min max 00:28:55.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.49 0.02 20245.26 233.54 45229.72 00:28:55.114 ======================================================== 00:28:55.114 Total : 49.49 0.02 20245.26 233.54 45229.72 00:28:55.114 00:28:55.114 05:43:00 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:55.114 05:43:00 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:55.114 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.071 Initializing NVMe Controllers 00:29:05.071 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:05.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:05.071 Initialization complete. Launching workers. 
00:29:05.071 ======================================================== 00:29:05.071 Latency(us) 00:29:05.071 Device Information : IOPS MiB/s Average min max 00:29:05.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.90 9.99 12521.94 3980.34 47899.99 00:29:05.071 ======================================================== 00:29:05.072 Total : 79.90 9.99 12521.94 3980.34 47899.99 00:29:05.072 00:29:05.072 05:43:10 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:05.072 05:43:10 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:05.072 05:43:10 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:05.072 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.066 Initializing NVMe Controllers 00:29:15.066 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:15.066 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:15.066 Initialization complete. Launching workers. 00:29:15.066 ======================================================== 00:29:15.066 Latency(us) 00:29:15.066 Device Information : IOPS MiB/s Average min max 00:29:15.066 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6789.10 3.31 4713.05 295.76 12059.52 00:29:15.066 ======================================================== 00:29:15.066 Total : 6789.10 3.31 4713.05 295.76 12059.52 00:29:15.066 00:29:15.066 05:43:20 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:15.066 05:43:20 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:15.066 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.035 Initializing NVMe Controllers 00:29:25.035 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:25.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:25.035 Initialization complete. Launching workers. 00:29:25.035 ======================================================== 00:29:25.035 Latency(us) 00:29:25.035 Device Information : IOPS MiB/s Average min max 00:29:25.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1687.75 210.97 18972.65 1356.74 46142.12 00:29:25.035 ======================================================== 00:29:25.035 Total : 1687.75 210.97 18972.65 1356.74 46142.12 00:29:25.035 00:29:25.035 05:43:31 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:25.035 05:43:31 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:25.035 05:43:31 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:25.035 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.989 Initializing NVMe Controllers 00:29:34.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.989 Controller IO queue size 128, less than required. 00:29:34.989 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:34.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:34.989 Initialization complete. Launching workers. 00:29:34.989 ======================================================== 00:29:34.989 Latency(us) 00:29:34.989 Device Information : IOPS MiB/s Average min max 00:29:34.989 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11879.78 5.80 10776.70 1529.30 26362.18 00:29:34.989 ======================================================== 00:29:34.989 Total : 11879.78 5.80 10776.70 1529.30 26362.18 00:29:34.989 00:29:34.989 05:43:41 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:34.989 05:43:41 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:34.989 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.963 Initializing NVMe Controllers 00:29:44.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:44.963 Controller IO queue size 128, less than required. 00:29:44.963 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:44.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:44.963 Initialization complete. Launching workers. 00:29:44.963 ======================================================== 00:29:44.963 Latency(us) 00:29:44.963 Device Information : IOPS MiB/s Average min max 00:29:44.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1200.30 150.04 107190.30 15195.24 214740.92 00:29:44.963 ======================================================== 00:29:44.963 Total : 1200.30 150.04 107190.30 15195.24 214740.92 00:29:44.963 00:29:44.963 05:43:51 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:44.963 05:43:51 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete eccee166-d02c-4840-830e-01eb92ec6ca8 00:29:45.895 05:43:52 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:46.152 05:43:53 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0164357f-be61-42f2-a3b9-03a31aa24b6b 00:29:46.410 05:43:53 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:46.668 rmmod nvme_tcp 00:29:46.668 rmmod nvme_fabrics 00:29:46.668 rmmod nvme_keyring 00:29:46.668 05:43:53 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3332749 ']' 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3332749 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 3332749 ']' 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 3332749 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3332749 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3332749' 00:29:46.668 killing process with pid 3332749 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 3332749 00:29:46.668 05:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 3332749 00:29:48.566 05:43:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:48.566 05:43:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:48.566 05:43:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:48.566 05:43:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:48.566 05:43:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:48.566 05:43:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.566 05:43:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:48.566 05:43:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.466 05:43:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:50.466 00:29:50.466 real 1m30.904s 00:29:50.466 user 5m37.109s 00:29:50.466 sys 0m14.926s 00:29:50.466 05:43:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:50.466 05:43:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:50.466 ************************************ 00:29:50.466 END TEST nvmf_perf 00:29:50.466 ************************************ 00:29:50.466 05:43:57 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:50.466 05:43:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:50.466 05:43:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:50.466 05:43:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.466 ************************************ 00:29:50.466 START TEST nvmf_fio_host 00:29:50.466 ************************************ 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:50.466 * Looking for test storage... 
00:29:50.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:50.466 05:43:57 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:52.399 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:52.399 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:52.399 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:52.399 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.399 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.657 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:52.657 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.657 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:52.657 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.657 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.657 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.657 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:52.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:29:52.658 00:29:52.658 --- 10.0.0.2 ping statistics --- 00:29:52.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.658 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:52.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:29:52.658 00:29:52.658 --- 10.0.0.1 ping statistics --- 00:29:52.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.658 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3344738 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3344738 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 3344738 ']' 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:52.658 05:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.658 [2024-07-14 05:43:59.651107] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:29:52.658 [2024-07-14 05:43:59.651195] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.658 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.658 [2024-07-14 05:43:59.716938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:52.915 [2024-07-14 05:43:59.806977] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:52.915 [2024-07-14 05:43:59.807030] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.915 [2024-07-14 05:43:59.807043] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.915 [2024-07-14 05:43:59.807054] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.915 [2024-07-14 05:43:59.807064] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.915 [2024-07-14 05:43:59.807119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.915 [2024-07-14 05:43:59.807192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:52.915 [2024-07-14 05:43:59.807260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:52.915 [2024-07-14 05:43:59.807263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.915 05:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:52.915 05:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:29:52.915 05:43:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:53.172 [2024-07-14 05:44:00.162369] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.172 05:44:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:53.172 05:44:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:53.172 05:44:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.172 05:44:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:53.429 Malloc1 00:29:53.429 05:44:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:53.686 05:44:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:53.943 05:44:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:54.200 [2024-07-14 05:44:01.214130] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.200 05:44:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:54.457 05:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:54.714 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:54.714 fio-3.35 00:29:54.714 Starting 1 thread 00:29:54.714 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.241 00:29:57.241 test: (groupid=0, jobs=1): err= 0: pid=3345093: Sun Jul 14 05:44:03 2024 00:29:57.241 read: IOPS=9292, BW=36.3MiB/s (38.1MB/s)(72.9MiB/2007msec) 00:29:57.241 slat (nsec): min=1857, max=159026, avg=2453.04, stdev=1799.28 00:29:57.241 clat (usec): min=3188, max=13101, avg=7620.77, stdev=548.95 00:29:57.241 lat (usec): min=3217, max=13103, avg=7623.22, stdev=548.84 00:29:57.241 clat percentiles (usec): 00:29:57.241 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7177], 00:29:57.241 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:29:57.241 | 70.00th=[ 7898], 80.00th=[ 8029], 90.00th=[ 8291], 95.00th=[ 8455], 00:29:57.241 | 99.00th=[ 8848], 99.50th=[ 8979], 99.90th=[10814], 99.95th=[11994], 00:29:57.241 | 99.99th=[13042] 00:29:57.241 bw ( KiB/s): 
min=36256, max=37872, per=99.97%, avg=37162.00, stdev=673.57, samples=4 00:29:57.241 iops : min= 9064, max= 9468, avg=9290.50, stdev=168.39, samples=4 00:29:57.241 write: IOPS=9297, BW=36.3MiB/s (38.1MB/s)(72.9MiB/2007msec); 0 zone resets 00:29:57.241 slat (nsec): min=1969, max=139724, avg=2593.67, stdev=1465.33 00:29:57.241 clat (usec): min=1427, max=12082, avg=6101.88, stdev=498.10 00:29:57.241 lat (usec): min=1436, max=12084, avg=6104.47, stdev=498.04 00:29:57.241 clat percentiles (usec): 00:29:57.241 | 1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:29:57.241 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6194], 00:29:57.241 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6849], 00:29:57.241 | 99.00th=[ 7177], 99.50th=[ 7242], 99.90th=[ 9896], 99.95th=[10945], 00:29:57.241 | 99.99th=[11994] 00:29:57.241 bw ( KiB/s): min=37056, max=37440, per=100.00%, avg=37206.00, stdev=167.98, samples=4 00:29:57.241 iops : min= 9264, max= 9360, avg=9301.50, stdev=42.00, samples=4 00:29:57.241 lat (msec) : 2=0.01%, 4=0.10%, 10=99.77%, 20=0.12% 00:29:57.241 cpu : usr=55.23%, sys=37.04%, ctx=35, majf=0, minf=6 00:29:57.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:57.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:57.241 issued rwts: total=18651,18660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:57.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:57.241 00:29:57.241 Run status group 0 (all jobs): 00:29:57.241 READ: bw=36.3MiB/s (38.1MB/s), 36.3MiB/s-36.3MiB/s (38.1MB/s-38.1MB/s), io=72.9MiB (76.4MB), run=2007-2007msec 00:29:57.241 WRITE: bw=36.3MiB/s (38.1MB/s), 36.3MiB/s-36.3MiB/s (38.1MB/s-38.1MB/s), io=72.9MiB (76.4MB), run=2007-2007msec 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:57.241 05:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:57.241 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:57.241 fio-3.35 00:29:57.241 Starting 1 thread 00:29:57.241 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.775 00:29:59.775 test: (groupid=0, jobs=1): err= 0: pid=3345536: Sun Jul 14 05:44:06 2024 00:29:59.775 read: IOPS=7409, BW=116MiB/s (121MB/s)(232MiB/2006msec) 00:29:59.775 slat (usec): min=2, max=115, avg= 3.76, stdev= 2.03 00:29:59.775 clat (usec): min=2220, max=24251, avg=10555.77, stdev=2694.86 00:29:59.775 lat (usec): min=2223, max=24254, avg=10559.53, stdev=2694.90 00:29:59.775 clat percentiles (usec): 00:29:59.775 | 1.00th=[ 5145], 5.00th=[ 6652], 10.00th=[ 7373], 20.00th=[ 8291], 00:29:59.775 | 30.00th=[ 8979], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[11076], 00:29:59.775 | 70.00th=[11600], 80.00th=[12518], 90.00th=[13829], 95.00th=[15401], 00:29:59.775 | 99.00th=[19268], 99.50th=[19792], 99.90th=[21627], 99.95th=[22152], 00:29:59.775 | 99.99th=[23725] 00:29:59.775 bw ( KiB/s): min=52448, max=66272, per=50.39%, avg=59744.00, stdev=6268.13, samples=4 00:29:59.775 iops : min= 3278, max= 4142, avg=3734.00, stdev=391.76, samples=4 00:29:59.775 write: IOPS=4216, BW=65.9MiB/s (69.1MB/s)(122MiB/1847msec); 0 zone resets 00:29:59.775 slat (usec): min=30, max=147, avg=34.68, stdev= 5.97 00:29:59.775 clat (usec): min=3649, max=19121, avg=12036.15, stdev=2068.78 00:29:59.775 lat (usec): min=3682, max=19158, avg=12070.83, stdev=2068.81 00:29:59.775 clat percentiles (usec): 00:29:59.775 | 1.00th=[ 7701], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10421], 00:29:59.775 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:29:59.775 | 70.00th=[13042], 80.00th=[13829], 90.00th=[15008], 95.00th=[15795], 00:29:59.775 | 99.00th=[16909], 99.50th=[17171], 99.90th=[18220], 99.95th=[18482], 00:29:59.775 | 99.99th=[19006] 00:29:59.775 bw ( KiB/s): min=54272, max=69440, per=91.78%, avg=61912.00, stdev=6799.85, samples=4 00:29:59.775 iops : min= 3392, max= 4340, avg=3869.50, stdev=424.99, samples=4 00:29:59.775 lat (msec) : 4=0.22%, 10=33.93%, 20=65.56%, 50=0.29% 00:29:59.775 cpu : usr=74.93%, sys=21.34%, ctx=20, majf=0, 
minf=2 00:29:59.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:29:59.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:59.775 issued rwts: total=14864,7787,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:59.775 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:59.775 00:29:59.775 Run status group 0 (all jobs): 00:29:59.775 READ: bw=116MiB/s (121MB/s), 116MiB/s-116MiB/s (121MB/s-121MB/s), io=232MiB (244MB), run=2006-2006msec 00:29:59.775 WRITE: bw=65.9MiB/s (69.1MB/s), 65.9MiB/s-65.9MiB/s (69.1MB/s-69.1MB/s), io=122MiB (128MB), run=1847-1847msec 00:29:59.775 05:44:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:00.033 05:44:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:00.033 05:44:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:00.033 05:44:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:00.033 05:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:00.033 05:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:30:00.033 05:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:00.033 05:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:00.033 05:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:30:00.033 05:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:30:00.033 05:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:30:00.033 05:44:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:03.313 Nvme0n1 00:30:03.313 05:44:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:05.838 05:44:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=629f9b35-b41f-4832-aace-814dc32e7463 00:30:05.838 05:44:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 629f9b35-b41f-4832-aace-814dc32e7463 00:30:05.838 05:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=629f9b35-b41f-4832-aace-814dc32e7463 00:30:05.838 05:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:05.838 05:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:05.838 05:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:05.838 05:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:06.095 05:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:06.095 { 00:30:06.095 "uuid": "629f9b35-b41f-4832-aace-814dc32e7463", 00:30:06.095 "name": "lvs_0", 00:30:06.095 "base_bdev": "Nvme0n1", 00:30:06.095 "total_data_clusters": 930, 00:30:06.095 "free_clusters": 930, 00:30:06.095 
"block_size": 512, 00:30:06.095 "cluster_size": 1073741824 00:30:06.095 } 00:30:06.095 ]' 00:30:06.095 05:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="629f9b35-b41f-4832-aace-814dc32e7463") .free_clusters' 00:30:06.095 05:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:30:06.095 05:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="629f9b35-b41f-4832-aace-814dc32e7463") .cluster_size' 00:30:06.353 05:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:06.353 05:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=952320 00:30:06.353 05:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:30:06.353 952320 00:30:06.353 05:44:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:06.611 ed503b1b-a981-4af0-9fc3-2cd2c70686cd 00:30:06.611 05:44:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:06.868 05:44:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:07.126 05:44:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:07.385 05:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:07.643 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:07.643 fio-3.35 00:30:07.643 Starting 1 thread 00:30:07.643 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.167 00:30:10.167 test: (groupid=0, jobs=1): err= 0: pid=3346823: Sun Jul 14 05:44:16 2024 00:30:10.167 read: IOPS=4095, BW=16.0MiB/s (16.8MB/s)(32.1MiB/2009msec) 00:30:10.167 slat (nsec): min=1797, max=132584, avg=2528.26, stdev=2535.52 00:30:10.167 clat (usec): min=1526, max=177818, avg=16949.17, stdev=13863.11 00:30:10.167 lat (usec): min=1528, max=177852, avg=16951.69, stdev=13863.47 00:30:10.167 clat percentiles (msec): 00:30:10.167 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:30:10.167 | 30.00th=[ 15], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 16], 00:30:10.167 | 70.00th=[ 17], 80.00th=[ 19], 90.00th=[ 20], 95.00th=[ 21], 00:30:10.167 | 99.00th=[ 23], 99.50th=[ 157], 99.90th=[ 178], 99.95th=[ 178], 00:30:10.167 | 99.99th=[ 178] 00:30:10.167 bw ( KiB/s): min= 9824, max=19680, per=99.53%, avg=16304.00, stdev=4643.46, samples=4 00:30:10.167 iops : min= 2456, max= 4920, avg=4076.00, stdev=1160.87, samples=4 00:30:10.167 write: IOPS=4121, BW=16.1MiB/s (16.9MB/s)(32.3MiB/2009msec); 0 zone resets 00:30:10.167 slat (nsec): min=1897, max=133243, avg=2678.81, stdev=2252.06 00:30:10.167 clat (usec): min=605, max=173896, avg=13963.38, stdev=12921.26 00:30:10.167 lat (usec): min=607, max=173903, avg=13966.06, stdev=12921.71 00:30:10.167 clat percentiles (msec): 00:30:10.167 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:30:10.167 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:30:10.167 | 70.00th=[ 14], 80.00th=[ 16], 90.00th=[ 17], 95.00th=[ 17], 00:30:10.167 | 99.00th=[ 19], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:30:10.167 | 99.99th=[ 174] 00:30:10.167 bw ( KiB/s): min=10280, max=19904, per=99.82%, avg=16458.00, stdev=4347.15, samples=4 00:30:10.167 iops : min= 2570, max= 4976, avg=4114.50, stdev=1086.79, samples=4 00:30:10.167 lat (usec) : 750=0.01%, 1000=0.01% 00:30:10.167 lat (msec) : 2=0.02%, 4=0.04%, 10=2.42%, 20=93.62%, 50=3.11% 00:30:10.167 lat (msec) : 250=0.78% 00:30:10.167 cpu : usr=49.00%, sys=46.96%, ctx=107, majf=0, minf=24 00:30:10.167 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:30:10.167 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:10.167 issued rwts: total=8227,8281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.167 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:10.167 00:30:10.167 Run status group 0 (all jobs): 00:30:10.167 READ: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=32.1MiB (33.7MB), run=2009-2009msec 00:30:10.167 WRITE: bw=16.1MiB/s (16.9MB/s), 16.1MiB/s-16.1MiB/s (16.9MB/s-16.9MB/s), io=32.3MiB (33.9MB), run=2009-2009msec 00:30:10.167 05:44:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:10.167 05:44:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:11.594 05:44:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=d2097e6d-f5e6-457a-9195-4d162c6f207f 00:30:11.594 05:44:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb d2097e6d-f5e6-457a-9195-4d162c6f207f 00:30:11.594 05:44:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=d2097e6d-f5e6-457a-9195-4d162c6f207f 00:30:11.594 05:44:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:11.594 05:44:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:11.594 05:44:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:11.594 05:44:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:11.594 05:44:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:11.594 { 00:30:11.594 "uuid": "629f9b35-b41f-4832-aace-814dc32e7463", 00:30:11.594 "name": "lvs_0", 00:30:11.594 "base_bdev": "Nvme0n1", 00:30:11.594 "total_data_clusters": 930, 00:30:11.594 "free_clusters": 0, 00:30:11.594 "block_size": 512, 00:30:11.594 "cluster_size": 1073741824 00:30:11.594 }, 00:30:11.594 { 00:30:11.594 "uuid": "d2097e6d-f5e6-457a-9195-4d162c6f207f", 00:30:11.594 "name": "lvs_n_0", 00:30:11.594 "base_bdev": "ed503b1b-a981-4af0-9fc3-2cd2c70686cd", 00:30:11.594 "total_data_clusters": 237847, 00:30:11.594 "free_clusters": 237847, 00:30:11.595 "block_size": 512, 00:30:11.595 "cluster_size": 4194304 00:30:11.595 } 00:30:11.595 ]' 00:30:11.595 05:44:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="d2097e6d-f5e6-457a-9195-4d162c6f207f") .free_clusters' 00:30:11.595 05:44:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:11.595 05:44:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="d2097e6d-f5e6-457a-9195-4d162c6f207f") .cluster_size' 00:30:11.595 05:44:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:11.595 05:44:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:11.595 05:44:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:11.595 951388 00:30:11.595 05:44:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:12.524 8cb88b1f-8dd0-4103-b203-2bc6ecd45abc 00:30:12.524 05:44:19 
nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:12.524 05:44:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:12.780 05:44:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:13.038 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:13.296 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:13.296 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:13.296 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:13.296 05:44:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:13.296 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:13.296 fio-3.35 00:30:13.296 Starting 1 thread 00:30:13.296 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.822 00:30:15.822 test: (groupid=0, jobs=1): err= 0: pid=3347552: Sun Jul 14 05:44:22 2024 00:30:15.822 read: IOPS=5878, BW=23.0MiB/s (24.1MB/s)(46.2MiB/2010msec) 00:30:15.822 slat (nsec): min=1921, max=142552, avg=2524.76, stdev=2207.33 00:30:15.822 clat (usec): min=4685, max=19487, avg=12079.85, stdev=986.89 00:30:15.822 lat (usec): min=4690, max=19490, avg=12082.37, stdev=986.77 00:30:15.822 clat percentiles (usec): 00:30:15.822 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:30:15.822 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12256], 00:30:15.822 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:30:15.822 | 99.00th=[14484], 99.50th=[14746], 99.90th=[17171], 99.95th=[18482], 00:30:15.823 | 99.99th=[19268] 00:30:15.823 bw ( KiB/s): min=22400, max=23952, per=99.96%, avg=23504.00, stdev=740.02, samples=4 00:30:15.823 iops : min= 5600, max= 5988, avg=5876.00, stdev=185.00, samples=4 00:30:15.823 write: IOPS=5871, BW=22.9MiB/s (24.1MB/s)(46.1MiB/2010msec); 0 zone resets 00:30:15.823 slat (usec): min=2, max=117, avg= 2.62, stdev= 1.65 00:30:15.823 clat (usec): min=2507, max=18533, avg=9589.40, stdev=912.27 00:30:15.823 lat (usec): min=2513, max=18535, avg=9592.02, stdev=912.23 00:30:15.823 clat percentiles (usec): 00:30:15.823 | 1.00th=[ 7504], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 8979], 00:30:15.823 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:30:15.823 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:30:15.823 | 99.00th=[11600], 99.50th=[11994], 99.90th=[17171], 99.95th=[18220], 00:30:15.823 | 99.99th=[18482] 00:30:15.823 bw ( KiB/s): min=23320, max=23616, per=99.96%, avg=23478.00, stdev=129.72, samples=4 00:30:15.823 iops : min= 5830, max= 5904, avg=5869.50, stdev=32.43, samples=4 00:30:15.823 lat (msec) : 4=0.05%, 10=36.04%, 20=63.91% 00:30:15.823 cpu : usr=53.91%, sys=40.27%, ctx=82, majf=0, minf=24 00:30:15.823 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:15.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:15.823 issued rwts: total=11816,11802,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:15.823 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:15.823 00:30:15.823 Run status group 0 (all jobs): 00:30:15.823 READ: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.2MiB (48.4MB), run=2010-2010msec 00:30:15.823 WRITE: bw=22.9MiB/s (24.1MB/s), 22.9MiB/s-22.9MiB/s (24.1MB/s-24.1MB/s), io=46.1MiB (48.3MB), run=2010-2010msec 00:30:15.823 05:44:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:16.081 05:44:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:16.081 05:44:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:20.260 05:44:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -l lvs_n_0 00:30:20.260 05:44:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:23.539 05:44:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:23.539 05:44:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:25.436 rmmod nvme_tcp 00:30:25.436 rmmod nvme_fabrics 00:30:25.436 rmmod nvme_keyring 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3344738 ']' 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3344738 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 3344738 ']' 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 3344738 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3344738 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3344738' 00:30:25.436 killing process with pid 3344738 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 3344738 00:30:25.436 05:44:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 3344738 00:30:25.694 05:44:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:25.694 05:44:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:25.694 05:44:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:25.694 05:44:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:25.694 05:44:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:25.694 05:44:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.694 05:44:32 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:25.694 05:44:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.595 05:44:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:27.595 00:30:27.595 real 0m37.198s 00:30:27.595 user 2m20.664s 00:30:27.595 sys 0m7.733s 00:30:27.595 05:44:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:27.595 05:44:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.595 ************************************ 00:30:27.595 END TEST nvmf_fio_host 00:30:27.595 ************************************ 00:30:27.595 05:44:34 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:27.595 05:44:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:27.595 05:44:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:27.595 05:44:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:27.595 ************************************ 00:30:27.595 START TEST nvmf_failover 00:30:27.595 ************************************ 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:27.595 * Looking for test storage... 00:30:27.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:27.595 05:44:34 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.853 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:27.853 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:27.853 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:27.853 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.853 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.853 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.853 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:27.853 05:44:34 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:27.853 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:27.853 05:44:34 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:27.853 05:44:34 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:27.853 05:44:34 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:27.853 05:44:34 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:27.853 05:44:34 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:27.853 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:27.853 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.853 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:27.854 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:27.854 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:27.854 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.854 05:44:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:27.854 05:44:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.854 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:27.854 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:27.854 05:44:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:27.854 05:44:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:29.755 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:29.755 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:29.755 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:29.755 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:29.755 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:29.755 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:29.755 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:29.755 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:29.755 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:29.755 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:29.755 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:29.755 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:29.756 05:44:36 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:29.756 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:29.756 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:29.756 05:44:36 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:29.756 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:29.756 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:29.756 
05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:29.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:29.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:30:29.756 00:30:29.756 --- 10.0.0.2 ping statistics --- 00:30:29.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.756 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:29.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:29.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:30:29.756 00:30:29.756 --- 10.0.0.1 ping statistics --- 00:30:29.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.756 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3350847 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3350847 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3350847 ']' 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:30:29.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:29.756 05:44:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:29.756 [2024-07-14 05:44:36.840327] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:29.756 [2024-07-14 05:44:36.840399] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:30.016 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.016 [2024-07-14 05:44:36.915538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:30.016 [2024-07-14 05:44:37.012067] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:30.016 [2024-07-14 05:44:37.012125] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:30.016 [2024-07-14 05:44:37.012161] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:30.016 [2024-07-14 05:44:37.012183] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:30.016 [2024-07-14 05:44:37.012203] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:30.016 [2024-07-14 05:44:37.012310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:30.016 [2024-07-14 05:44:37.012409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:30.016 [2024-07-14 05:44:37.012419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.274 05:44:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:30.274 05:44:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:30.274 05:44:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:30.274 05:44:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:30.274 05:44:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:30.274 05:44:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:30.274 05:44:37 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:30.532 [2024-07-14 05:44:37.425780] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:30.532 05:44:37 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:30.791 Malloc0 00:30:30.791 05:44:37 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:31.049 05:44:38 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:31.319 05:44:38 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.621 [2024-07-14 05:44:38.513924] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.621 05:44:38 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:31.879 [2024-07-14 05:44:38.770704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:31.879 05:44:38 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:32.137 [2024-07-14 05:44:39.063704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:32.137 05:44:39 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3351195 00:30:32.137 05:44:39 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:32.137 05:44:39 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:32.137 05:44:39 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3351195 /var/tmp/bdevperf.sock 00:30:32.137 05:44:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3351195 ']' 00:30:32.137 05:44:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:32.137 05:44:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:32.137 05:44:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:32.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
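The trace up to this point is host/failover.sh building the target side over JSON-RPC: a TCP transport, a 64 MB malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and listeners on ports 4420, 4421 and 4422 of 10.0.0.2 (the address assigned inside the cvl_0_0_ns_spdk namespace earlier), followed by a bdevperf instance that idles on /var/tmp/bdevperf.sock until perform_tests is issued. A condensed sketch of that setup, assuming $SPDK as shorthand for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk (every command and flag is copied from the trace above):

    rpc="$SPDK/scripts/rpc.py"
    NQN=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t tcp -o -u 8192               # TCP transport, options as in the trace
    $rpc bdev_malloc_create 64 512 -b Malloc0                  # 64 MB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem $NQN -a -s SPDK00000000000001   # subsystem, any host allowed
    $rpc nvmf_subsystem_add_ns $NQN Malloc0                    # attach Malloc0 as a namespace
    for port in 4420 4421 4422; do                             # the three ports the failover test flips between
        $rpc nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s "$port"
    done

    # Initiator side: bdevperf starts idle (-z) on its own RPC socket; the verify workload
    # (queue depth 128, 4 KiB I/O, 15 s) only starts once perform_tests is sent to that socket.
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

All three listeners are created up front so the test can later remove and re-add individual ports while bdevperf keeps I/O running against whichever paths remain.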
00:30:32.137 05:44:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:32.137 05:44:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:32.396 05:44:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:32.396 05:44:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:32.396 05:44:39 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:32.654 NVMe0n1 00:30:32.654 05:44:39 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:33.221 00:30:33.221 05:44:40 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3351225 00:30:33.221 05:44:40 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:33.221 05:44:40 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:34.157 05:44:41 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:34.416 [2024-07-14 05:44:41.335698] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.335771] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.335787] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.335799] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.335811] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.335824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.335836] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.335881] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.335895] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.335921] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.335934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.335946] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.335957] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.335969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.335980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.335992] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336004] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336016] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336027] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336050] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336062] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336074] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336121] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336149] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336161] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336199] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the 
state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336247] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336287] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336298] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336310] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336322] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336385] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336480] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336515] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336526] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336550] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.416 [2024-07-14 05:44:41.336562] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336626] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336638] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336650] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336685] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336711] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336725] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336774] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336797] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336809] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336820] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336832] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336861] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336882] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336895] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 
05:44:41.336907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336918] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336930] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336942] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.336993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.337005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.337017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.337029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 [2024-07-14 05:44:41.337042] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1d50 is same with the state(5) to be set 00:30:34.417 05:44:41 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:37.695 05:44:44 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:37.695 00:30:37.695 05:44:44 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:37.953 [2024-07-14 05:44:44.975465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975571] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975591] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975608] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975627] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975644] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975663] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975681] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975699] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975756] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975774] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975792] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975893] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975921] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975939] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975977] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.975998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.976017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.976034] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.976053] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the 
state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.976070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.976089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.976109] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.976129] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.976162] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.976183] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.976217] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.976237] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.976259] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.976280] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 [2024-07-14 05:44:44.976301] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2bd0 is same with the state(5) to be set 00:30:37.953 05:44:44 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:41.232 05:44:47 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:41.232 [2024-07-14 05:44:48.270638] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.232 05:44:48 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:42.606 05:44:49 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:42.606 [2024-07-14 05:44:49.532525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532596] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532620] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532640] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532659] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532677] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 
00:30:42.606 [2024-07-14 05:44:49.532694] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532711] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532731] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532748] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532858] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532888] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532909] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532949] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.532987] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.533006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.533026] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.533047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.533067] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.533087] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.533105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.533131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.533171] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.533194] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.533214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.533247] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.533267] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.533301] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.533321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.533343] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.533377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.606 [2024-07-14 05:44:49.533399] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533418] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533439] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533461] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533480] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533502] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533523] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533542] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533605] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533645] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533721] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533742] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533786] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533859] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533910] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.533989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.534011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.534031] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.534053] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.534073] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.534094] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.534118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.534138] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the state(5) to be set 00:30:42.607 [2024-07-14 05:44:49.534161] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3750 is same with the 
state(5) to be set 00:30:42.607 05:44:49 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3351225 00:30:49.169 0 00:30:49.169 05:44:55 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3351195 00:30:49.169 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3351195 ']' 00:30:49.169 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3351195 00:30:49.169 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:49.169 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:49.169 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3351195 00:30:49.169 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:49.169 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:49.169 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3351195' 00:30:49.169 killing process with pid 3351195 00:30:49.169 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3351195 00:30:49.169 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3351195 00:30:49.169 05:44:55 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:49.169 [2024-07-14 05:44:39.127854] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:49.169 [2024-07-14 05:44:39.127989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3351195 ] 00:30:49.169 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.169 [2024-07-14 05:44:39.194862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.169 [2024-07-14 05:44:39.286718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.169 Running I/O for 15 seconds... 
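Everything from here down is the bdevperf-side log (the try.txt dump from host/failover.sh@63); the ABORTED - SQ DELETION completions that follow are what the listener removals look like from the initiator while I/O is in flight. The path flipping itself was driven from failover.sh through the two RPC sockets. A condensed sketch of that sequence, again assuming $SPDK as shorthand for the workspace spdk directory and $rpc/$brpc/$NQN as local shorthands (the commands, ports and sleeps are taken from the failover.sh trace earlier in this log):

    rpc="$SPDK/scripts/rpc.py"                    # target-side RPC (default /var/tmp/spdk.sock)
    brpc="$rpc -s /var/tmp/bdevperf.sock"         # bdevperf-side RPC
    NQN=nqn.2016-06.io.spdk:cnode1

    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN   # first path
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN   # second path
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &      # start the 15 s verify run
    sleep 1
    $rpc nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420    # drop port 4420 under I/O
    sleep 3
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN   # add a third path
    $rpc nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421    # drop port 4421
    sleep 3
    $rpc nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420       # bring port 4420 back
    sleep 1
    $rpc nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422    # drop port 4422
    wait                                                                    # wait for perform_tests to finish

Each removal tears down the qpairs behind one listener, which is presumably why the target logged the long runs of "The recv state of tqpair ... is same with the state(5) to be set" above and why bdevperf reports aborted commands below while bdev_nvme fails over to a path that is still up.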
00:30:49.169 [2024-07-14 05:44:41.340004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.169 [2024-07-14 05:44:41.340050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.169 [2024-07-14 05:44:41.340078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.169 [2024-07-14 05:44:41.340095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.169 [2024-07-14 05:44:41.340110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.169 [2024-07-14 05:44:41.340125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.169 [2024-07-14 05:44:41.340141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.169 [2024-07-14 05:44:41.340155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.169 [2024-07-14 05:44:41.340185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340355] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340639] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75720 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.340960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.340975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.170 [2024-07-14 05:44:41.340989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.341003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.170 [2024-07-14 05:44:41.341017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.341032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.170 [2024-07-14 05:44:41.341045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.341060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.170 [2024-07-14 05:44:41.341074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.341089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.170 [2024-07-14 05:44:41.341102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.341122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.170 [2024-07-14 05:44:41.341136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.341151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.170 [2024-07-14 05:44:41.341165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.341196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.341209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.341224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.341237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.341251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:49.170 [2024-07-14 05:44:41.341264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.341279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.341292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.341306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.341320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.341334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.341347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.341361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.341375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.341389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.341401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.170 [2024-07-14 05:44:41.341416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.170 [2024-07-14 05:44:41.341429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341542] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341815] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.341974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.341988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.342002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.342016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.342030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.342044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.342059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.342072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.342087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.171 [2024-07-14 05:44:41.342100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.342129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.171 [2024-07-14 05:44:41.342146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75984 len:8 PRP1 0x0 PRP2 0x0 00:30:49.171 [2024-07-14 
05:44:41.342159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.342199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.171 [2024-07-14 05:44:41.342211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.171 [2024-07-14 05:44:41.342222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75992 len:8 PRP1 0x0 PRP2 0x0 00:30:49.171 [2024-07-14 05:44:41.342234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.342247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.171 [2024-07-14 05:44:41.342257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.171 [2024-07-14 05:44:41.342268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76000 len:8 PRP1 0x0 PRP2 0x0 00:30:49.171 [2024-07-14 05:44:41.342284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.342298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.171 [2024-07-14 05:44:41.342308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.171 [2024-07-14 05:44:41.342319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76008 len:8 PRP1 0x0 PRP2 0x0 00:30:49.171 [2024-07-14 05:44:41.342331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.342343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.171 [2024-07-14 05:44:41.342353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.171 [2024-07-14 05:44:41.342364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76016 len:8 PRP1 0x0 PRP2 0x0 00:30:49.171 [2024-07-14 05:44:41.342376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.342388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.171 [2024-07-14 05:44:41.342398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.171 [2024-07-14 05:44:41.342409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76024 len:8 PRP1 0x0 PRP2 0x0 00:30:49.171 [2024-07-14 05:44:41.342421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.342433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.171 [2024-07-14 05:44:41.342443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.171 [2024-07-14 05:44:41.342454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76032 len:8 PRP1 0x0 PRP2 0x0 00:30:49.171 [2024-07-14 05:44:41.342466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.342478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.171 [2024-07-14 05:44:41.342489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.171 [2024-07-14 05:44:41.342500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76040 len:8 PRP1 0x0 PRP2 0x0 00:30:49.171 [2024-07-14 05:44:41.342512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.342524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.171 [2024-07-14 05:44:41.342534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.171 [2024-07-14 05:44:41.342545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76048 len:8 PRP1 0x0 PRP2 0x0 00:30:49.171 [2024-07-14 05:44:41.342557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.342570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.171 [2024-07-14 05:44:41.342580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.171 [2024-07-14 05:44:41.342591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76056 len:8 PRP1 0x0 PRP2 0x0 00:30:49.171 [2024-07-14 05:44:41.342603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.342615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.171 [2024-07-14 05:44:41.342625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.171 [2024-07-14 05:44:41.342639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76064 len:8 PRP1 0x0 PRP2 0x0 00:30:49.171 [2024-07-14 05:44:41.342652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.171 [2024-07-14 05:44:41.342664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.171 [2024-07-14 05:44:41.342675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.171 [2024-07-14 05:44:41.342685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76072 len:8 PRP1 0x0 PRP2 0x0 00:30:49.171 [2024-07-14 05:44:41.342697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.342710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.342720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.342730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76080 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.342743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.342756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.342766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.342777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76088 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.342789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.342801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.342812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.342823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76096 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.342835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.342863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.342882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.342894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76104 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.342907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.342920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.342931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.342941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76112 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.342954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.342967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.342977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.342988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76120 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76128 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:49.172 [2024-07-14 05:44:41.343064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76136 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76144 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76152 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76160 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76168 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76176 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343364] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76184 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76192 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76200 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76208 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76216 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76224 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76232 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76240 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76248 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76256 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76264 len:8 PRP1 0x0 PRP2 0x0 00:30:49.172 [2024-07-14 05:44:41.343897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.172 [2024-07-14 05:44:41.343910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.172 [2024-07-14 05:44:41.343921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.172 [2024-07-14 05:44:41.343933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76272 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.343945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.343958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 
05:44:41.343969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.343980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76280 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.343992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76288 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76296 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76304 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76312 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76320 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344294] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76328 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76336 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76344 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76352 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76360 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76368 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76376 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76384 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76392 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76400 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76408 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76416 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 
05:44:41.344898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76424 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.344946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76432 len:8 PRP1 0x0 PRP2 0x0 00:30:49.173 [2024-07-14 05:44:41.344965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.173 [2024-07-14 05:44:41.344978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.173 [2024-07-14 05:44:41.344989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.173 [2024-07-14 05:44:41.345000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76440 len:8 PRP1 0x0 PRP2 0x0 00:30:49.175 [2024-07-14 05:44:41.345018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:41.345074] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x203eb50 was disconnected and freed. reset controller. 00:30:49.175 [2024-07-14 05:44:41.345092] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:49.175 [2024-07-14 05:44:41.345124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.175 [2024-07-14 05:44:41.345142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:41.345156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.175 [2024-07-14 05:44:41.345169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:41.345183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.175 [2024-07-14 05:44:41.345199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:41.345213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.175 [2024-07-14 05:44:41.345225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:41.345238] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:49.175 [2024-07-14 05:44:41.348520] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.175 [2024-07-14 05:44:41.348556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201feb0 (9): Bad file descriptor 00:30:49.175 [2024-07-14 05:44:41.382696] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:49.175 [2024-07-14 05:44:44.977119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 
[2024-07-14 05:44:44.977703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.977974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.977989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.978003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.978017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.978035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.175 [2024-07-14 05:44:44.978050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.175 [2024-07-14 05:44:44.978065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.978094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:48 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.978376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.978403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.978433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.978460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.978486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.978513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.978539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.978566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92208 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:49.176 [2024-07-14 05:44:44.978925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.978984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.978998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.979014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.979028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.979043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.176 [2024-07-14 05:44:44.979056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.979071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.979086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.979101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.979114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.979129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.979143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.979161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.979176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.979190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.979204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.979219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.979232] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.979262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.979275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.979290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.979303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.979317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.176 [2024-07-14 05:44:44.979330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.176 [2024-07-14 05:44:44.979344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.979357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.979384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.979411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.979438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.979466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.979493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.979520] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.177 [2024-07-14 05:44:44.979553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.177 [2024-07-14 05:44:44.979580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.177 [2024-07-14 05:44:44.979607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.177 [2024-07-14 05:44:44.979635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.177 [2024-07-14 05:44:44.979662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.177 [2024-07-14 05:44:44.979690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.177 [2024-07-14 05:44:44.979717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.177 [2024-07-14 05:44:44.979744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.979771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.979798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.979825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.979873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.979909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.979939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.979968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.979983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.979997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 
[2024-07-14 05:44:44.980134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.177 [2024-07-14 05:44:44.980586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.177 [2024-07-14 05:44:44.980600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.178 [2024-07-14 05:44:44.980613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:44.980628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.178 [2024-07-14 05:44:44.980644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:44.980659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.178 [2024-07-14 05:44:44.980672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:44.980686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.178 [2024-07-14 05:44:44.980699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:44.980714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:81 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.178 [2024-07-14 05:44:44.980727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:44.980741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.178 [2024-07-14 05:44:44.980754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:44.980768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.178 [2024-07-14 05:44:44.980781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:44.980796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.178 [2024-07-14 05:44:44.980809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:44.980824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.178 [2024-07-14 05:44:44.980837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:44.980874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.178 [2024-07-14 05:44:44.980890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:44.980906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.178 [2024-07-14 05:44:44.980920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:44.980948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.178 [2024-07-14 05:44:44.980962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.178 [2024-07-14 05:44:44.980974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93152 len:8 PRP1 0x0 PRP2 0x0 00:30:49.178 [2024-07-14 05:44:44.980988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:44.981044] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2040b50 was disconnected and freed. reset controller. 
00:30:49.178 [2024-07-14 05:44:44.981063] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:49.178 [2024-07-14 05:44:44.981095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.178 [2024-07-14 05:44:44.981113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:44.981132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.178 [2024-07-14 05:44:44.981146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:44.981159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.178 [2024-07-14 05:44:44.981172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:44.981186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.178 [2024-07-14 05:44:44.981199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:44.981211] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.178 [2024-07-14 05:44:44.981247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201feb0 (9): Bad file descriptor 00:30:49.178 [2024-07-14 05:44:44.984504] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.178 [2024-07-14 05:44:45.016187] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:49.178 [2024-07-14 05:44:49.535044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.178 [2024-07-14 05:44:49.535348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.178 [2024-07-14 05:44:49.535375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535389] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535655] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.178 [2024-07-14 05:44:49.535739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.178 [2024-07-14 05:44:49.535751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.535765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.535778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.535792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.535804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.535818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:30520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.179 [2024-07-14 05:44:49.535831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.535845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.179 [2024-07-14 05:44:49.535874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.535907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.179 [2024-07-14 05:44:49.535920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.535934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:30544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.179 [2024-07-14 05:44:49.535947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.535962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30552 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.179 [2024-07-14 05:44:49.535975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.535989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.179 [2024-07-14 05:44:49.536002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.179 [2024-07-14 05:44:49.536030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:49.179 [2024-07-14 05:44:49.536262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536541] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.179 [2024-07-14 05:44:49.536791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.179 [2024-07-14 05:44:49.536805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.180 [2024-07-14 05:44:49.536818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.536832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.180 [2024-07-14 05:44:49.536845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.536860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.180 [2024-07-14 05:44:49.536895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.536912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.180 [2024-07-14 05:44:49.536926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.536942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.536955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.536970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:30584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.536983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.536998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:30616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:30640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:30656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:30672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:30704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:30720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:30728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:30760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:30768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 
05:44:49.537714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:30792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:30808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.537975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.537989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.538004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.538017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.538032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.538045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.538060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.180 [2024-07-14 05:44:49.538074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.180 [2024-07-14 05:44:49.538089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.181 [2024-07-14 05:44:49.538102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.181 [2024-07-14 05:44:49.538136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.181 [2024-07-14 05:44:49.538184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.181 [2024-07-14 05:44:49.538213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.181 [2024-07-14 05:44:49.538240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.181 [2024-07-14 05:44:49.538268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.181 [2024-07-14 05:44:49.538299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.181 [2024-07-14 05:44:49.538327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:112 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.181 [2024-07-14 05:44:49.538359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.181 [2024-07-14 05:44:49.538404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30960 len:8 PRP1 0x0 PRP2 0x0 00:30:49.181 [2024-07-14 05:44:49.538417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.181 [2024-07-14 05:44:49.538445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.181 [2024-07-14 05:44:49.538456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30968 len:8 PRP1 0x0 PRP2 0x0 00:30:49.181 [2024-07-14 05:44:49.538468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.181 [2024-07-14 05:44:49.538491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.181 [2024-07-14 05:44:49.538502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30976 len:8 PRP1 0x0 PRP2 0x0 00:30:49.181 [2024-07-14 05:44:49.538514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.181 [2024-07-14 05:44:49.538537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.181 [2024-07-14 05:44:49.538547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30984 len:8 PRP1 0x0 PRP2 0x0 00:30:49.181 [2024-07-14 05:44:49.538559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.181 [2024-07-14 05:44:49.538582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.181 [2024-07-14 05:44:49.538593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30992 len:8 PRP1 0x0 PRP2 0x0 00:30:49.181 [2024-07-14 05:44:49.538605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.181 [2024-07-14 05:44:49.538628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.181 [2024-07-14 05:44:49.538638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31000 len:8 PRP1 0x0 PRP2 0x0 00:30:49.181 [2024-07-14 05:44:49.538650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.181 [2024-07-14 05:44:49.538677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.181 [2024-07-14 05:44:49.538688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31008 len:8 PRP1 0x0 PRP2 0x0 00:30:49.181 [2024-07-14 05:44:49.538700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.181 [2024-07-14 05:44:49.538722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.181 [2024-07-14 05:44:49.538733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31016 len:8 PRP1 0x0 PRP2 0x0 00:30:49.181 [2024-07-14 05:44:49.538745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.181 [2024-07-14 05:44:49.538773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.181 [2024-07-14 05:44:49.538784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31024 len:8 PRP1 0x0 PRP2 0x0 00:30:49.181 [2024-07-14 05:44:49.538796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.181 [2024-07-14 05:44:49.538818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.181 [2024-07-14 05:44:49.538829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31032 len:8 PRP1 0x0 PRP2 0x0 00:30:49.181 [2024-07-14 05:44:49.538841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.181 [2024-07-14 05:44:49.538894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.181 [2024-07-14 05:44:49.538906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31040 len:8 PRP1 0x0 PRP2 0x0 00:30:49.181 [2024-07-14 05:44:49.538918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.181 [2024-07-14 05:44:49.538943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.181 [2024-07-14 05:44:49.538954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31048 len:8 PRP1 0x0 PRP2 0x0 00:30:49.181 [2024-07-14 05:44:49.538966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.538979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.181 [2024-07-14 05:44:49.538989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.181 [2024-07-14 05:44:49.539000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31056 len:8 PRP1 0x0 PRP2 0x0 00:30:49.181 [2024-07-14 05:44:49.539012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.539025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.181 [2024-07-14 05:44:49.539036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.181 [2024-07-14 05:44:49.539047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31064 len:8 PRP1 0x0 PRP2 0x0 00:30:49.181 [2024-07-14 05:44:49.539063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.539076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.181 [2024-07-14 05:44:49.539087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.181 [2024-07-14 05:44:49.539098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31072 len:8 PRP1 0x0 PRP2 0x0 00:30:49.181 [2024-07-14 05:44:49.539111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.539123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.181 [2024-07-14 05:44:49.539134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.181 [2024-07-14 05:44:49.539145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31080 len:8 PRP1 0x0 PRP2 0x0 00:30:49.181 [2024-07-14 05:44:49.539167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.539243] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21ea070 was disconnected and freed. reset controller. 
00:30:49.181 [2024-07-14 05:44:49.539260] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:49.181 [2024-07-14 05:44:49.539293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.181 [2024-07-14 05:44:49.539325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.539340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.181 [2024-07-14 05:44:49.539354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.539367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.181 [2024-07-14 05:44:49.539380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.539393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.181 [2024-07-14 05:44:49.539406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.181 [2024-07-14 05:44:49.539419] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.181 [2024-07-14 05:44:49.539456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201feb0 (9): Bad file descriptor 00:30:49.181 [2024-07-14 05:44:49.542742] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.181 [2024-07-14 05:44:49.576020] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:49.181 
00:30:49.181 Latency(us)
00:30:49.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:49.182 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:49.182 Verification LBA range: start 0x0 length 0x4000
00:30:49.182 NVMe0n1 : 15.01 8930.93 34.89 239.62 0.00 13929.08 837.40 17476.27
00:30:49.182 ===================================================================================================================
00:30:49.182 Total : 8930.93 34.89 239.62 0.00 13929.08 837.40 17476.27
00:30:49.182 Received shutdown signal, test time was about 15.000000 seconds
00:30:49.182 
00:30:49.182 Latency(us)
00:30:49.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:49.182 ===================================================================================================================
00:30:49.182 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:49.182 05:44:55 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:49.182 05:44:55 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:30:49.182 05:44:55 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:30:49.182 05:44:55 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3353062
00:30:49.182 05:44:55 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:49.182 05:44:55 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3353062 /var/tmp/bdevperf.sock
00:30:49.182 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3353062 ']'
00:30:49.182 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:49.182 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:30:49.182 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:49.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:30:49.182 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:49.182 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:49.182 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:49.182 05:44:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:49.182 05:44:55 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:49.182 [2024-07-14 05:44:56.016814] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:49.182 05:44:56 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:49.182 [2024-07-14 05:44:56.261520] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:49.440 05:44:56 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:49.698 NVMe0n1 00:30:49.698 05:44:56 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:49.955 00:30:50.212 05:44:57 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:50.468 00:30:50.468 05:44:57 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:50.468 05:44:57 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:50.724 05:44:57 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:50.980 05:44:57 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:54.305 05:45:00 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:54.305 05:45:00 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:54.305 05:45:01 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3353797 00:30:54.305 05:45:01 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:54.305 05:45:01 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3353797 00:30:55.240 0 00:30:55.240 05:45:02 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:55.240 [2024-07-14 05:44:55.540842] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:30:55.240 [2024-07-14 05:44:55.540949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353062 ] 00:30:55.240 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.240 [2024-07-14 05:44:55.602443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.240 [2024-07-14 05:44:55.685327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.240 [2024-07-14 05:44:57.890263] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:55.240 [2024-07-14 05:44:57.890343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.240 [2024-07-14 05:44:57.890367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.240 [2024-07-14 05:44:57.890383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.240 [2024-07-14 05:44:57.890396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.240 [2024-07-14 05:44:57.890409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.240 [2024-07-14 05:44:57.890422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.240 [2024-07-14 05:44:57.890436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.240 [2024-07-14 05:44:57.890449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.240 [2024-07-14 05:44:57.890462] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.240 [2024-07-14 05:44:57.890502] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.240 [2024-07-14 05:44:57.890532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f0eb0 (9): Bad file descriptor 00:30:55.240 [2024-07-14 05:44:57.905373] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:55.240 Running I/O for 1 seconds... 
00:30:55.240 
00:30:55.240 Latency(us)
00:30:55.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:55.240 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:55.240 Verification LBA range: start 0x0 length 0x4000
00:30:55.240 NVMe0n1 : 1.01 8443.87 32.98 0.00 0.00 15098.42 2415.12 20097.71
00:30:55.240 ===================================================================================================================
00:30:55.240 Total : 8443.87 32.98 0.00 0.00 15098.42 2415.12 20097.71
00:30:55.240 05:45:02 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:55.240 05:45:02 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:30:55.498 05:45:02 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:55.755 05:45:02 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:55.755 05:45:02 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:30:56.012 05:45:03 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:56.270 05:45:03 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:30:59.547 05:45:06 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:59.547 05:45:06 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:30:59.547 05:45:06 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3353062
00:30:59.547 05:45:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3353062 ']'
00:30:59.547 05:45:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3353062
00:30:59.547 05:45:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:30:59.547 05:45:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:30:59.547 05:45:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3353062
00:30:59.547 05:45:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:30:59.547 05:45:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:30:59.547 05:45:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3353062'
00:30:59.547 killing process with pid 3353062
00:30:59.547 05:45:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3353062
00:30:59.547 05:45:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3353062
00:30:59.805 05:45:06 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:30:59.805 05:45:06 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:00.062 05:45:07 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:31:00.062 
05:45:07 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:00.062 05:45:07 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:00.062 05:45:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:00.062 05:45:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:00.062 05:45:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:00.062 05:45:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:00.062 05:45:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:00.062 05:45:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:00.062 rmmod nvme_tcp 00:31:00.062 rmmod nvme_fabrics 00:31:00.062 rmmod nvme_keyring 00:31:00.062 05:45:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:00.062 05:45:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:00.063 05:45:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:00.063 05:45:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3350847 ']' 00:31:00.063 05:45:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3350847 00:31:00.063 05:45:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3350847 ']' 00:31:00.063 05:45:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3350847 00:31:00.063 05:45:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:00.063 05:45:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:00.063 05:45:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3350847 00:31:00.063 05:45:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:00.063 05:45:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:00.063 05:45:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3350847' 00:31:00.063 killing process with pid 3350847 00:31:00.063 05:45:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3350847 00:31:00.063 05:45:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3350847 00:31:00.321 05:45:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:00.321 05:45:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:00.321 05:45:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:00.321 05:45:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:00.321 05:45:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:00.321 05:45:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.321 05:45:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:00.321 05:45:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.850 05:45:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:02.850 00:31:02.850 real 0m34.812s 00:31:02.850 user 2m2.737s 00:31:02.850 sys 0m5.855s 00:31:02.850 05:45:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:02.850 05:45:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
00:31:02.850 ************************************ 00:31:02.850 END TEST nvmf_failover 00:31:02.850 ************************************ 00:31:02.850 05:45:09 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:02.850 05:45:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:02.850 05:45:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:02.850 05:45:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:02.850 ************************************ 00:31:02.850 START TEST nvmf_host_discovery 00:31:02.851 ************************************ 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:02.851 * Looking for test storage... 00:31:02.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:02.851 05:45:09 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:02.851 05:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:04.753 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:04.753 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.753 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:04.754 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:04.754 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:04.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:04.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:31:04.754 00:31:04.754 --- 10.0.0.2 ping statistics --- 00:31:04.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.754 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:04.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:31:04.754 00:31:04.754 --- 10.0.0.1 ping statistics --- 00:31:04.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.754 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3356943 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3356943 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3356943 ']' 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.754 [2024-07-14 05:45:11.569593] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:04.754 [2024-07-14 05:45:11.569680] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.754 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.754 [2024-07-14 05:45:11.638429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.754 [2024-07-14 05:45:11.727747] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.754 [2024-07-14 05:45:11.727812] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.754 [2024-07-14 05:45:11.727837] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.754 [2024-07-14 05:45:11.727858] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.754 [2024-07-14 05:45:11.727888] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
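Condensed from the trace above, the phy-mode TCP environment for this test is prepared roughly as follows before the target starts. The interface names cvl_0_0 / cvl_0_1, the namespace name cvl_0_0_ns_spdk, the 10.0.0.0/24 addresses, and the nvmf_tgt flags are the ones printed in this run; the long Jenkins workspace path is abbreviated to ./build/bin, and backgrounding the target with a trailing & (then polling its RPC socket via waitforlisten) is an assumption about how the helper runs it.

    ip netns add cvl_0_0_ns_spdk                              # target runs in its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address (host namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic on port 4420 in
    ping -c 1 10.0.0.2                                        # initiator -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

Splitting the two ends across a namespace boundary keeps target and initiator on one host while still exercising both physical ports (0000:0a:00.0 and 0000:0a:00.1) discovered earlier in the trace.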
00:31:04.754 [2024-07-14 05:45:11.727928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:04.754 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.013 [2024-07-14 05:45:11.863448] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.013 [2024-07-14 05:45:11.871636] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.013 null0 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.013 null1 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3357082 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 3357082 /tmp/host.sock 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3357082 ']' 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:05.013 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:05.013 05:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.013 [2024-07-14 05:45:11.944404] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:05.013 [2024-07-14 05:45:11.944470] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3357082 ] 00:31:05.013 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.013 [2024-07-14 05:45:12.005731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.013 [2024-07-14 05:45:12.096647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.271 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:05.271 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:05.271 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:05.271 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:05.271 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.271 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.271 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.271 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:05.271 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.271 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.271 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.271 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:05.271 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:05.271 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # 
sort 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:05.272 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:05.530 05:45:12 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.530 [2024-07-14 05:45:12.477251] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:05.530 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:05.531 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:05.531 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.531 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:05.531 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.531 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:05.531 05:45:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:05.531 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.789 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:31:05.789 05:45:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:06.353 [2024-07-14 05:45:13.271984] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:06.353 [2024-07-14 05:45:13.272008] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:06.353 [2024-07-14 05:45:13.272029] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:06.353 [2024-07-14 05:45:13.399501] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:06.610 [2024-07-14 05:45:13.542620] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:31:06.610 [2024-07-14 05:45:13.542646] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:06.610 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.868 05:45:13 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.868 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:06.869 05:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.242 [2024-07-14 05:45:14.964980] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:08.242 [2024-07-14 05:45:14.965841] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:08.242 [2024-07-14 05:45:14.965891] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:08.242 05:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.242 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.242 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.242 05:45:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:08.242 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:08.242 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 
max=10 00:31:08.242 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.242 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:08.242 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:08.242 05:45:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.242 05:45:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:08.242 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.242 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.242 05:45:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:08.242 05:45:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:08.242 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.242 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:08.243 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.243 05:45:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:08.243 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:08.243 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.243 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.243 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:08.243 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:08.243 05:45:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:08.243 05:45:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:08.243 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.243 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.243 05:45:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:08.243 05:45:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:08.243 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.243 [2024-07-14 05:45:15.093285] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:08.243 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:08.243 05:45:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:08.500 [2024-07-14 05:45:15.358636] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:08.500 [2024-07-14 05:45:15.358662] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:08.500 [2024-07-14 05:45:15.358673] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.065 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.324 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:09.324 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:09.324 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:09.324 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.324 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:09.324 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.324 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.324 [2024-07-14 05:45:16.193243] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:09.324 [2024-07-14 05:45:16.193283] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:09.324 [2024-07-14 05:45:16.196545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:09.324 [2024-07-14 05:45:16.196579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.324 [2024-07-14 05:45:16.196596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:09.324 [2024-07-14 05:45:16.196610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.324 [2024-07-14 05:45:16.196624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:09.324 [2024-07-14 05:45:16.196639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.324 [2024-07-14 05:45:16.196661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:09.324 [2024-07-14 05:45:16.196675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.324 [2024-07-14 05:45:16.196688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c0450 is same with the state(5) to be set 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.325 05:45:16 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:09.325 [2024-07-14 05:45:16.206551] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c0450 (9): Bad file descriptor 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.325 [2024-07-14 05:45:16.216605] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:09.325 [2024-07-14 05:45:16.216900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.325 [2024-07-14 05:45:16.216930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c0450 with addr=10.0.0.2, port=4420 00:31:09.325 [2024-07-14 05:45:16.216947] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c0450 is same with the state(5) to be set 00:31:09.325 [2024-07-14 05:45:16.216971] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c0450 (9): Bad file descriptor 00:31:09.325 [2024-07-14 05:45:16.216993] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:09.325 [2024-07-14 05:45:16.217007] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:09.325 [2024-07-14 05:45:16.217023] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:09.325 [2024-07-14 05:45:16.217043] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
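The repeated 'local max=10', '(( max-- ))', 'eval', 'sleep 1' and 'return 0' fragments in the trace above all come from the test's condition-polling helper (waitforcondition in common/autotest_common.sh). A minimal bash sketch of what that helper does, reconstructed from the trace rather than from the helper's actual source, so details may differ:

waitforcondition() {
	# Poll an arbitrary bash condition (passed as a string) until it holds.
	local cond=$1
	local max=10
	while (( max-- )); do
		# e.g. cond='[[ "$(get_subsystem_names)" == "nvme0" ]]'
		if eval "$cond"; then
			return 0
		fi
		sleep 1
	done
	return 1	# callers treat a timeout here as a test failure
}

The discovery test uses this to wait until the controller names, bdev lists and listener ports reported over /tmp/host.sock reach the expected state before it moves to the next step.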
00:31:09.325 [2024-07-14 05:45:16.226679] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:09.325 [2024-07-14 05:45:16.226951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.325 [2024-07-14 05:45:16.226980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c0450 with addr=10.0.0.2, port=4420 00:31:09.325 [2024-07-14 05:45:16.226996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c0450 is same with the state(5) to be set 00:31:09.325 [2024-07-14 05:45:16.227018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c0450 (9): Bad file descriptor 00:31:09.325 [2024-07-14 05:45:16.227039] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:09.325 [2024-07-14 05:45:16.227058] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:09.325 [2024-07-14 05:45:16.227089] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:09.325 [2024-07-14 05:45:16.227108] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:09.325 [2024-07-14 05:45:16.236750] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:09.325 [2024-07-14 05:45:16.236981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.325 [2024-07-14 05:45:16.237011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c0450 with addr=10.0.0.2, port=4420 00:31:09.325 [2024-07-14 05:45:16.237027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c0450 is same with the state(5) to be set 00:31:09.325 [2024-07-14 05:45:16.237050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c0450 (9): Bad file descriptor 00:31:09.325 [2024-07-14 05:45:16.237071] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:09.325 [2024-07-14 05:45:16.237085] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:09.325 [2024-07-14 05:45:16.237099] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:31:09.325 [2024-07-14 05:45:16.237118] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:09.325 [2024-07-14 05:45:16.246827] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:09.325 [2024-07-14 05:45:16.247091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.325 [2024-07-14 05:45:16.247120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c0450 with addr=10.0.0.2, port=4420 00:31:09.325 [2024-07-14 05:45:16.247136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c0450 is same with the state(5) to be set 00:31:09.325 [2024-07-14 05:45:16.247159] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c0450 (9): Bad file descriptor 00:31:09.325 [2024-07-14 05:45:16.247180] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:09.325 [2024-07-14 05:45:16.247195] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:09.325 [2024-07-14 05:45:16.247209] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:09.325 [2024-07-14 05:45:16.247233] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.325 [2024-07-14 05:45:16.256900] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:09.325 [2024-07-14 05:45:16.257202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.325 [2024-07-14 05:45:16.257229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c0450 with addr=10.0.0.2, port=4420 00:31:09.325 [2024-07-14 05:45:16.257246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c0450 is same with the state(5) to be set 00:31:09.325 [2024-07-14 05:45:16.257268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c0450 (9): Bad file descriptor 00:31:09.325 [2024-07-14 05:45:16.257289] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:09.325 [2024-07-14 05:45:16.257303] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:09.325 [2024-07-14 05:45:16.257317] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:09.325 [2024-07-14 05:45:16.257336] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.325 [2024-07-14 05:45:16.266968] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:09.325 [2024-07-14 05:45:16.267273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.325 [2024-07-14 05:45:16.267299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c0450 with addr=10.0.0.2, port=4420 00:31:09.325 [2024-07-14 05:45:16.267315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c0450 is same with the state(5) to be set 00:31:09.325 [2024-07-14 05:45:16.267336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c0450 (9): Bad file descriptor 00:31:09.325 [2024-07-14 05:45:16.267356] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:09.325 [2024-07-14 05:45:16.267385] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:09.325 [2024-07-14 05:45:16.267398] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:09.325 [2024-07-14 05:45:16.267416] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.325 [2024-07-14 05:45:16.277038] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:09.325 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 [2024-07-14 05:45:16.277295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.325 [2024-07-14 05:45:16.277323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c0450 with addr=10.0.0.2, port=4420 00:31:09.325 [2024-07-14 05:45:16.277340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c0450 is same with the state(5) to be set 00:31:09.325 [2024-07-14 05:45:16.277362] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c0450 (9): Bad file descriptor 00:31:09.325 [2024-07-14 05:45:16.277382] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:09.325 [2024-07-14 05:45:16.277397] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:09.325 [2024-07-14 05:45:16.277410] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:09.325 [2024-07-14 05:45:16.277429] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
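The burst of 'connect() failed, errno = 111' and 'Resetting controller failed.' messages above is expected at this point in the test: host/discovery.sh@127 has just removed the 10.0.0.2:4420 listener, so every reconnect attempt the host's bdev_nvme layer makes against that path is refused (errno 111 is ECONNREFUSED) until the next discovery log page reports the 4420 path as not found and it is dropped, leaving only 4421. A hedged sketch of the same sequence driven by hand with scripts/rpc.py; the RPC names, NQN, ports and /tmp/host.sock socket are the ones visible in this run:

# target side: swing the subsystem from port 4420 to 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# host side: the trsvcid list reported for nvme0 shrinks from "4420 4421" to "4421"
rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'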
00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:09.325 [2024-07-14 05:45:16.279511] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:09.325 [2024-07-14 05:45:16.279538] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:09.325 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:09.326 05:45:16 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:09.326 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.584 05:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.517 [2024-07-14 05:45:17.552825] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:10.517 [2024-07-14 05:45:17.552860] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:10.517 [2024-07-14 05:45:17.552892] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:10.775 [2024-07-14 05:45:17.681343] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:11.033 [2024-07-14 05:45:17.989409] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:11.033 [2024-07-14 05:45:17.989461] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:11.033 05:45:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.033 05:45:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:11.033 05:45:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:11.033 05:45:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:11.033 05:45:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:11.033 05:45:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:11.033 05:45:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:11.033 05:45:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:11.033 05:45:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:11.033 05:45:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.033 05:45:17 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:11.033 request: 00:31:11.033 { 00:31:11.033 "name": "nvme", 00:31:11.033 "trtype": "tcp", 00:31:11.033 "traddr": "10.0.0.2", 00:31:11.033 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:11.033 "adrfam": "ipv4", 00:31:11.033 "trsvcid": "8009", 00:31:11.033 "wait_for_attach": true, 00:31:11.033 "method": "bdev_nvme_start_discovery", 00:31:11.033 "req_id": 1 00:31:11.033 } 00:31:11.033 Got JSON-RPC error response 00:31:11.033 response: 00:31:11.033 { 00:31:11.033 "code": -17, 00:31:11.033 "message": "File exists" 00:31:11.033 } 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:11.033 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.034 request: 00:31:11.034 { 00:31:11.034 "name": "nvme_second", 00:31:11.034 "trtype": "tcp", 00:31:11.034 "traddr": "10.0.0.2", 00:31:11.034 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:11.034 "adrfam": "ipv4", 00:31:11.034 "trsvcid": "8009", 00:31:11.034 "wait_for_attach": true, 00:31:11.034 "method": "bdev_nvme_start_discovery", 00:31:11.034 "req_id": 1 00:31:11.034 } 00:31:11.034 Got JSON-RPC error response 00:31:11.034 response: 00:31:11.034 { 00:31:11.034 "code": -17, 00:31:11.034 "message": "File exists" 00:31:11.034 } 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:11.034 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.293 05:45:18 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.293 05:45:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.259 [2024-07-14 05:45:19.208947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.259 [2024-07-14 05:45:19.209012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c1190 with addr=10.0.0.2, port=8010 00:31:12.259 [2024-07-14 05:45:19.209045] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:12.259 [2024-07-14 05:45:19.209060] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:12.259 [2024-07-14 05:45:19.209074] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:13.193 [2024-07-14 05:45:20.211277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.193 [2024-07-14 05:45:20.211318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c1190 with addr=10.0.0.2, port=8010 00:31:13.193 [2024-07-14 05:45:20.211343] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:13.193 [2024-07-14 05:45:20.211356] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:13.193 [2024-07-14 05:45:20.211368] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:14.125 [2024-07-14 05:45:21.213510] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:14.125 request: 00:31:14.125 { 00:31:14.125 "name": "nvme_second", 00:31:14.125 "trtype": "tcp", 00:31:14.125 "traddr": "10.0.0.2", 00:31:14.125 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:14.125 "adrfam": "ipv4", 00:31:14.125 "trsvcid": "8010", 00:31:14.125 "attach_timeout_ms": 3000, 00:31:14.125 "method": "bdev_nvme_start_discovery", 00:31:14.125 "req_id": 1 00:31:14.125 } 00:31:14.125 Got JSON-RPC error response 00:31:14.125 response: 00:31:14.125 { 00:31:14.125 "code": -110, 00:31:14.125 "message": "Connection timed out" 
00:31:14.125 } 00:31:14.125 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:14.125 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:14.125 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:14.125 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:14.125 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:14.125 05:45:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:14.125 05:45:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:14.125 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.125 05:45:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:14.125 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.125 05:45:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:14.125 05:45:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:14.125 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3357082 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:14.384 rmmod nvme_tcp 00:31:14.384 rmmod nvme_fabrics 00:31:14.384 rmmod nvme_keyring 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3356943 ']' 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3356943 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 3356943 ']' 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 3356943 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3356943 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3356943' 00:31:14.384 killing process with pid 3356943 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 3356943 00:31:14.384 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 3356943 00:31:14.643 05:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:14.643 05:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:14.643 05:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:14.643 05:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:14.643 05:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:14.643 05:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.643 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:14.643 05:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:17.174 00:31:17.174 real 0m14.170s 00:31:17.174 user 0m21.253s 00:31:17.174 sys 0m2.703s 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.174 ************************************ 00:31:17.174 END TEST nvmf_host_discovery 00:31:17.174 ************************************ 00:31:17.174 05:45:23 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:17.174 05:45:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:17.174 05:45:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:17.174 05:45:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:17.174 ************************************ 00:31:17.174 START TEST nvmf_host_multipath_status 00:31:17.174 ************************************ 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:17.174 * Looking for test storage... 
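The 'END TEST nvmf_host_discovery' banner with its real/user/sys timings above, and the 'START TEST nvmf_host_multipath_status' banner that follows, are printed by the run_test wrapper visible in the trace ('run_test nvmf_host_multipath_status .../multipath_status.sh --transport=tcp'). A rough sketch of that wrapper, inferred only from the banners and timing lines in this log, not from the real autotest_common.sh source:

run_test() {
	local test_name=$1; shift
	echo "************************************"
	echo "START TEST $test_name"
	echo "************************************"
	time "$@"	# produces the real/user/sys lines when the test script exits
	echo "************************************"
	echo "END TEST $test_name"
	echo "************************************"
}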
00:31:17.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:17.174 05:45:23 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:17.174 05:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:19.077 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:19.077 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
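The gather_supported_nvmf_pci_devs block above seeds the e810, x722 and mlx arrays from known Intel (0x8086) and Mellanox (0x15b3) vendor:device pairs and, since this is a TCP run on E810 hardware, keeps the two 0x159b ports it finds (0000:0a:00.0 and 0000:0a:00.1); the trace then maps each hit to its kernel interface via /sys/bus/pci/devices/$pci/net/. A minimal sketch of the same discovery done directly against sysfs, rather than through SPDK's pci_bus_cache, might look like the following; the helper is illustrative, not the nvmf/common.sh implementation:

    # Sketch only: collect NICs whose PCI vendor:device pair is in the supported set,
    # mirroring the IDs probed above (Intel E810/X722, Mellanox ConnectX families).
    declare -a supported=()
    for pci in /sys/bus/pci/devices/*; do
        ven=$(cat "$pci/vendor"); dev=$(cat "$pci/device")
        case "$ven:$dev" in
            0x8086:0x1592|0x8086:0x159b|0x8086:0x37d2|0x15b3:0xa2dc|0x15b3:0x1021|0x15b3:0xa2d6|0x15b3:0x101d|0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x1015|0x15b3:0x1013)
                supported+=("${pci##*/}")   # keep the BDF, e.g. 0000:0a:00.0
                ;;
        esac
    done
    # Map each kept device to its kernel net interface(s), as the trace does next.
    for bdf in "${supported[@]}"; do
        for net in /sys/bus/pci/devices/"$bdf"/net/*; do
            [ -e "$net" ] && echo "Found net device under $bdf: ${net##*/}"
        done
    done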
00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:19.077 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:19.077 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:19.077 05:45:25 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:19.077 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:19.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:19.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:31:19.078 00:31:19.078 --- 10.0.0.2 ping statistics --- 00:31:19.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.078 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:19.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:19.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:31:19.078 00:31:19.078 --- 10.0.0.1 ping statistics --- 00:31:19.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.078 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3360242 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3360242 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3360242 ']' 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:19.078 05:45:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:19.078 [2024-07-14 05:45:25.927347] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
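The nvmf_tcp_init sequence above splits the two ports across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed 10.0.0.2/24 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1/24 (initiator side), TCP port 4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. Condensed into a standalone sketch (interface names and addresses taken from the trace, error handling omitted):

    TARGET_NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"                        # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                            # root ns -> target ns
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                 # target ns -> root ns
    # The NVMe-oF target is then started inside the namespace, as the log shows:
    # ip netns exec "$TARGET_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3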
00:31:19.078 [2024-07-14 05:45:25.927436] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:19.078 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.078 [2024-07-14 05:45:26.002002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:19.078 [2024-07-14 05:45:26.104373] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:19.078 [2024-07-14 05:45:26.104432] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:19.078 [2024-07-14 05:45:26.104447] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:19.078 [2024-07-14 05:45:26.104460] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:19.078 [2024-07-14 05:45:26.104470] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:19.078 [2024-07-14 05:45:26.105889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.078 [2024-07-14 05:45:26.105895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.336 05:45:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:19.336 05:45:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:19.336 05:45:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:19.336 05:45:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:19.336 05:45:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:19.336 05:45:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:19.336 05:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3360242 00:31:19.336 05:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:19.594 [2024-07-14 05:45:26.505330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:19.594 05:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:19.852 Malloc0 00:31:19.853 05:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:20.110 05:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:20.368 05:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:20.625 [2024-07-14 05:45:27.642409] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.625 05:45:27 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:20.882 [2024-07-14 05:45:27.899073] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:20.882 05:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3360528 00:31:20.882 05:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:20.882 05:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:20.882 05:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3360528 /var/tmp/bdevperf.sock 00:31:20.882 05:45:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3360528 ']' 00:31:20.882 05:45:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:20.882 05:45:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:20.882 05:45:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:20.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:20.882 05:45:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:20.882 05:45:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:21.140 05:45:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:21.140 05:45:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:21.140 05:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:21.397 05:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:21.961 Nvme0n1 00:31:21.961 05:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:22.217 Nvme0n1 00:31:22.217 05:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:22.217 05:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:24.742 05:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:24.742 05:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:24.742 05:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:24.742 05:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:26.115 05:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:26.115 05:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:26.115 05:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.115 05:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:26.115 05:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.115 05:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:26.115 05:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.115 05:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:26.372 05:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:26.372 05:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:26.372 05:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.372 05:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:26.628 05:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.628 05:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:26.628 05:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.628 05:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:26.885 05:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.885 05:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:26.885 05:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.885 05:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:31:27.143 05:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.143 05:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:27.143 05:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.143 05:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:27.400 05:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.400 05:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:27.400 05:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:27.660 05:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:27.918 05:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:28.867 05:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:28.867 05:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:28.867 05:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.867 05:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:29.175 05:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:29.175 05:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:29.175 05:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.175 05:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:29.175 05:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.175 05:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:29.175 05:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.175 05:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:29.433 05:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:31:29.433 05:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:29.433 05:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.433 05:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:29.691 05:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.691 05:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:29.691 05:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.691 05:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:29.950 05:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.950 05:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:29.950 05:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.950 05:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:30.208 05:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.208 05:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:30.208 05:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:30.466 05:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:30.725 05:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:31.659 05:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:31.660 05:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:31.660 05:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.660 05:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:31.917 05:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.917 05:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:31:31.917 05:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.917 05:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:32.175 05:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:32.175 05:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:32.175 05:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.175 05:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:32.432 05:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.432 05:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:32.432 05:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.432 05:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:32.690 05:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.690 05:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:32.690 05:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.690 05:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:32.947 05:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.947 05:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:32.947 05:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.947 05:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:33.204 05:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.204 05:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:33.204 05:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:33.460 05:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:33.717 05:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:34.648 05:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:34.648 05:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:34.648 05:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.649 05:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:34.913 05:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.913 05:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:34.914 05:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.914 05:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:35.177 05:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:35.177 05:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:35.177 05:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.177 05:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:35.434 05:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.434 05:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:35.434 05:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.434 05:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:35.691 05:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.691 05:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:35.691 05:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.691 05:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:35.949 05:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
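Every check_status round in this log is six port_status calls, and each port_status is one bdev_nvme_get_io_paths RPC against the bdevperf socket, a jq filter that selects the io_path whose transport.trsvcid matches the listener port, and a [[ ... ]] comparison of the extracted current/connected/accessible flag against the expected value. A hedged bash reconstruction of that helper pair (names mirror multipath_status.sh, but this is a sketch rather than the script itself):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    port_status() {    # port_status <trsvcid> <field> <expected>
        local port=$1 field=$2 expected=$3 actual
        actual=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

    check_status() {   # expected flags, in order: current, connected, accessible for 4420/4421
        port_status 4420 current    $1 && port_status 4421 current    $2 &&
        port_status 4420 connected  $3 && port_status 4421 connected  $4 &&
        port_status 4420 accessible $5 && port_status 4421 accessible $6
    }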
00:31:35.949 05:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:35.949 05:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.949 05:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:36.206 05:45:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:36.206 05:45:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:36.206 05:45:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:36.464 05:45:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:36.722 05:45:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:37.653 05:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:37.653 05:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:37.653 05:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.654 05:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:37.911 05:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:37.911 05:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:37.911 05:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.911 05:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:38.169 05:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:38.169 05:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:38.169 05:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.169 05:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:38.432 05:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.432 05:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
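Between the status checks the test flips the ANA state of the two listeners and gives the host a moment to react: set_ANA_state issues one nvmf_subsystem_listener_set_ana_state per listener port against the target's default RPC socket, then the script sleeps before re-running check_status. A sketch of that driver pattern, reusing check_status from the sketch above and listing the state combinations this run walks through (the sequential framing is illustrative):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    set_ANA_state() {  # set_ANA_state <state for port 4420> <state for port 4421>
        $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # ANA combinations exercised in this log, each followed by the expected path flags:
    set_ANA_state optimized optimized         && sleep 1 && check_status true false true true true true
    set_ANA_state non_optimized optimized     && sleep 1 && check_status false true true true true true
    set_ANA_state non_optimized non_optimized && sleep 1 && check_status true false true true true true
    set_ANA_state non_optimized inaccessible  && sleep 1 && check_status true false true true true false
    set_ANA_state inaccessible inaccessible   && sleep 1 && check_status false false true true false false
    set_ANA_state inaccessible optimized      && sleep 1 && check_status false true true true false true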
00:31:38.432 05:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.432 05:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:38.689 05:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.689 05:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:38.689 05:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.689 05:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:38.946 05:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:38.946 05:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:38.946 05:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.946 05:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:39.203 05:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:39.203 05:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:39.203 05:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:39.461 05:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:39.718 05:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:40.652 05:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:40.652 05:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:40.652 05:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.652 05:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:40.909 05:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:40.909 05:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:40.909 05:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.909 05:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:41.167 05:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.167 05:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:41.167 05:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.167 05:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:41.425 05:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.425 05:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:41.425 05:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.425 05:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:41.683 05:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.683 05:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:41.683 05:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.683 05:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:41.941 05:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:41.941 05:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:41.941 05:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.941 05:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:42.198 05:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.198 05:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:42.456 05:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:42.456 05:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:31:42.714 05:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:42.971 05:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:43.957 05:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:43.957 05:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:43.957 05:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.957 05:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:44.214 05:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.214 05:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:44.215 05:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.215 05:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:44.473 05:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.473 05:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:44.473 05:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.473 05:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:44.757 05:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.757 05:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:44.757 05:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.757 05:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:45.015 05:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.015 05:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:45.015 05:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.015 05:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:45.273 05:45:52 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.273 05:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:45.273 05:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.273 05:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:45.530 05:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.530 05:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:45.530 05:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:45.788 05:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:46.046 05:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:46.987 05:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:46.987 05:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:46.987 05:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.987 05:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:47.244 05:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:47.244 05:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:47.244 05:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.244 05:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:47.501 05:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.501 05:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:47.501 05:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.501 05:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:47.759 05:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.759 05:45:54 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:47.759 05:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.759 05:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:48.016 05:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.016 05:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:48.016 05:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.016 05:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:48.275 05:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.275 05:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:48.275 05:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.275 05:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:48.532 05:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.532 05:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:48.532 05:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:48.790 05:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:49.047 05:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:49.978 05:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:49.978 05:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:49.978 05:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.978 05:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:50.235 05:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.235 05:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:50.235 05:45:57 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.235 05:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:50.492 05:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.492 05:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:50.492 05:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.492 05:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:50.750 05:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.750 05:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:50.750 05:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.750 05:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:51.009 05:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.009 05:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:51.009 05:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.009 05:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:51.267 05:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.267 05:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:51.267 05:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.267 05:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:51.525 05:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.525 05:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:51.525 05:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:51.783 05:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:52.042 05:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:52.977 05:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:52.977 05:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:52.977 05:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.977 05:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:53.236 05:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.236 05:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:53.236 05:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.236 05:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:53.508 05:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:53.508 05:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:53.508 05:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.508 05:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:53.766 05:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.766 05:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:53.766 05:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.766 05:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:54.024 05:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.024 05:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:54.024 05:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.024 05:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:54.282 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.282 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:54.282 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.282 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:54.540 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:54.540 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3360528 00:31:54.540 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3360528 ']' 00:31:54.540 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3360528 00:31:54.540 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:54.540 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:54.540 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3360528 00:31:54.540 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:31:54.540 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:31:54.540 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3360528' 00:31:54.540 killing process with pid 3360528 00:31:54.540 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3360528 00:31:54.540 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3360528 00:31:54.540 Connection closed with partial response: 00:31:54.540 00:31:54.540 00:31:54.808 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3360528 00:31:54.809 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:54.809 [2024-07-14 05:45:27.957299] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:54.809 [2024-07-14 05:45:27.957390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3360528 ] 00:31:54.809 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.809 [2024-07-14 05:45:28.019681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.809 [2024-07-14 05:45:28.111282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:54.809 Running I/O for 90 seconds... 
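The traces above show how the test polls bdevperf for per-path state and how it flips the target's ANA state before each check. Below is a minimal sketch of those two helpers, reconstructed only from the commands visible in the trace (the rpc.py path, socket, ports, address, and subsystem NQN are taken from this run; the function bodies themselves are an assumed reconstruction, not the verbatim multipath_status.sh):

#!/usr/bin/env bash
# Assumed sketch of the helpers exercised in the trace above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

# port_status <trsvcid> <field> <expected>: query bdevperf's io_paths and
# compare one field (current / connected / accessible) with the expected value.
port_status() {
	local port=$1 field=$2 expected=$3 actual
	actual=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths \
		| jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
	[[ "$actual" == "$expected" ]]
}

# set_ANA_state <state_for_4420> <state_for_4421>: change the ANA state the
# target advertises on each listener; the caller then sleeps 1s and re-checks.
set_ANA_state() {
	$rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
		-t tcp -a 10.0.0.2 -s 4420 -n "$1"
	$rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
		-t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# Example matching the last step in the trace: 4420 non_optimized, 4421
# inaccessible, then 4420 stays current/connected/accessible while 4421
# loses accessibility.
# set_ANA_state non_optimized inaccessible; sleep 1
# port_status 4420 current true && port_status 4421 accessible false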
00:31:54.809 [2024-07-14 05:45:43.442687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.442772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.442835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.442875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.442903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.442921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.442943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.442961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.442984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.443963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.443980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.444003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.444020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.444042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:54.809 [2024-07-14 05:45:43.444059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.444082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.444099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.444121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.444138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.444161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.444178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.444200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.444216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.444238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.444255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.444282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.444315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.444338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.809 [2024-07-14 05:45:43.444354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:54.809 [2024-07-14 05:45:43.444376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.444392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.444414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.444430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.444470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.444486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.444509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.444526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.444548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.444565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.444587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.444604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.444626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.444643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.444665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.444682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.444703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.444720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.444742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.444759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.444782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.444802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.444842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.444860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.444906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.444924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.444947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.444964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.444986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.445003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.445042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.445081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.445120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.445159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.810 [2024-07-14 05:45:43.445198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.810 [2024-07-14 05:45:43.445237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.810 [2024-07-14 05:45:43.445277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:54.810 
[2024-07-14 05:45:43.445298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.810 [2024-07-14 05:45:43.445320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.810 [2024-07-14 05:45:43.445374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.810 [2024-07-14 05:45:43.445415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.810 [2024-07-14 05:45:43.445624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.445675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.445722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.445766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.445811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.445856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.445913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.445958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.445984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.446001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.446043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.446060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.446092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.446109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.446135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.446152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.446178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.446195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.446221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.446238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.446264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.446280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.446306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.446323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:54.810 [2024-07-14 05:45:43.446349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.810 [2024-07-14 05:45:43.446366] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.446392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.446409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.446436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.446453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.446479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.446496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.446522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.446539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.446565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.446581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.446614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.446631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.446658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.446674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.446700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.446717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.446743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.446760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.446786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.446803] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.446830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.446846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.446880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.446898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.446924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.446941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.446967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.446984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.447026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.447069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.447112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.447158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.447202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:54.811 [2024-07-14 05:45:43.447245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.447287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.447329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.811 [2024-07-14 05:45:43.447372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.811 [2024-07-14 05:45:43.447414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.811 [2024-07-14 05:45:43.447457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.811 [2024-07-14 05:45:43.447499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.811 [2024-07-14 05:45:43.447542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.811 [2024-07-14 05:45:43.447584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.811 [2024-07-14 05:45:43.447627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:90 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.811 [2024-07-14 05:45:43.447673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.447716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.811 [2024-07-14 05:45:43.447759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.811 [2024-07-14 05:45:43.447802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.811 [2024-07-14 05:45:43.447845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.811 [2024-07-14 05:45:43.447896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.811 [2024-07-14 05:45:43.447939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.447965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.811 [2024-07-14 05:45:43.447981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.448008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.811 [2024-07-14 05:45:43.448024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.448050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.811 [2024-07-14 05:45:43.448067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.448093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.811 [2024-07-14 05:45:43.448109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.448135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.811 [2024-07-14 05:45:43.448151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:54.811 [2024-07-14 05:45:43.448177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.812 [2024-07-14 05:45:43.448194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:43.448225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.812 [2024-07-14 05:45:43.448242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:43.448268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.812 [2024-07-14 05:45:43.448285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:43.448311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.812 [2024-07-14 05:45:43.448328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:43.448354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.812 [2024-07-14 05:45:43.448370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:43.448396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.812 [2024-07-14 05:45:43.448413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:43.448439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.812 [2024-07-14 05:45:43.448457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:43.448805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.812 [2024-07-14 05:45:43.448829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 
dnr:0 00:31:54.812 [2024-07-14 05:45:58.959050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.959962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.959984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.960001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.960023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.960040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.960062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.960079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.960101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.960117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.960139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.960156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.960178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.812 [2024-07-14 05:45:58.960196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.960218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.812 [2024-07-14 05:45:58.960234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.960274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.812 [2024-07-14 05:45:58.960291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.960312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.960343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.960365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:54.812 [2024-07-14 05:45:58.960385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.960407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.960422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.960443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.960459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.960480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.960496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:54.812 [2024-07-14 05:45:58.960518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.812 [2024-07-14 05:45:58.960533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.960555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:55312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.960570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.960591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.960607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.960627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.960643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.960664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.960680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.960701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.960717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.961709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.961736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.961764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.961783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.961806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.961823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.961852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.961878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.961903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.961920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.961943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.961960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.961982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.961999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.962037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.813 [2024-07-14 05:45:58.962076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.813 [2024-07-14 05:45:58.962117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.962155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.962195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.962234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.962274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.962330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.962388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.962425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.962462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.962519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.962558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:31:54.813 [2024-07-14 05:45:58.962580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.962597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.962635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.962673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.962727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.813 [2024-07-14 05:45:58.962769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.813 [2024-07-14 05:45:58.962809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.813 [2024-07-14 05:45:58.962848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:54.813 [2024-07-14 05:45:58.962877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.813 [2024-07-14 05:45:58.962899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.962928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.962945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.962969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.962986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.963009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.963026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.963047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.963064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.963087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.963103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.963126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.963147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.963170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.963186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.963209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.963242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.963264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.963280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.963302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.963318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.963340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.963372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.963396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.963417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.963439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.963456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.963485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.963503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.963526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.963543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.964396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.964421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.964449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.964467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.964490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.964508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.964530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.964547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.964569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.964587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.964609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:55528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.964626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.964648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:55560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:54.814 [2024-07-14 05:45:58.964665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.964687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.964704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.964742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.964764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.964802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.964819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.964840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.964856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.964901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.964920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.964941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.964958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.964979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.964996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.965022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.965073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.965103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.965121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.965143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 
nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.965161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.965198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.965216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.965239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.965255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.965277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.965293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.965315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.965332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.965374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.814 [2024-07-14 05:45:58.965392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.965416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.965449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.965472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.965489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.965528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.965546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:54.814 [2024-07-14 05:45:58.965568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.814 [2024-07-14 05:45:58.965586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.965613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.965631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.965654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.965672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.966208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.966235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.966264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.966283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.966307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.966326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.966350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.966367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.966390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.966408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.966436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.966455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.966478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.966495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.966518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.966535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:31:54.815 [2024-07-14 05:45:58.966558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.966576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.966598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.966616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.966643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.966677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.966702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.966719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.966741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.966762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.966785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.966803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.966825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.966842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.966891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.966910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.966933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.966951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.966974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.966995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.967019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.967037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.967059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.967076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.967099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.967116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.967138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.967156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.967178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.967196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.967218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.967235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.967259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.967291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.969493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.969517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.969544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.969562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.969584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.969600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.969622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.969638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.969659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.969682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.969705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.969721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.969742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.969758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.969779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.969796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.969818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.969834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.969878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.969898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.969936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.815 [2024-07-14 05:45:58.969956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.969980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.969998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.970020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:54.815 [2024-07-14 05:45:58.970037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.970058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.970075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:54.815 [2024-07-14 05:45:58.970097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.815 [2024-07-14 05:45:58.970114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.970136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.816 [2024-07-14 05:45:58.970167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.970190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.816 [2024-07-14 05:45:58.970206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.970248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.970265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.970287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.970304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.970324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.970341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.970362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.970378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.970399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.970415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.970435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 
nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.970451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.970472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.816 [2024-07-14 05:45:58.970488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.970509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.816 [2024-07-14 05:45:58.970525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.970546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.816 [2024-07-14 05:45:58.970562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.970584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.816 [2024-07-14 05:45:58.970600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.973206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.816 [2024-07-14 05:45:58.973247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.973275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.816 [2024-07-14 05:45:58.973308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.973335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.816 [2024-07-14 05:45:58.973352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.973372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.816 [2024-07-14 05:45:58.973387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.973408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.816 [2024-07-14 05:45:58.973423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.973444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.816 [2024-07-14 05:45:58.973459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.973480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.816 [2024-07-14 05:45:58.973515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.973539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.973555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.973577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.973593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.973614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.973631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.973652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.973668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.973689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.973709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.973731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.973748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.973770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.973787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.973808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.973828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 
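(Editor's note on the notices above: the "(03/02)" pair printed by spdk_nvme_print_completion is the NVMe status-code-type/status-code of each completion — SCT 0x3 is Path Related Status and SC 0x02 is Asymmetric Access Inaccessible, matching the "ASYMMETRIC ACCESS INACCESSIBLE" text, and presumably provoked by the ANA/failover path this test exercises on qid:1. The following is a rough, self-contained sketch of how that status halfword decodes; it is not code from the SPDK tree, and the struct below is a local stand-in for illustration rather than SPDK's own spdk_nvme_status definition.)

/* Hypothetical standalone decoder for the "(SCT/SC)" pair seen in the log. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Status field layout per the NVMe spec (CQE dword 3, bits 31:16),
 * allocated LSB-first as on the usual little-endian Linux targets. */
struct nvme_status {
	uint16_t p   : 1;  /* phase tag */
	uint16_t sc  : 8;  /* status code */
	uint16_t sct : 3;  /* status code type */
	uint16_t crd : 2;  /* command retry delay */
	uint16_t m   : 1;  /* more */
	uint16_t dnr : 1;  /* do not retry */
};

static const char *status_name(const struct nvme_status *s)
{
	if (s->sct == 0x0 && s->sc == 0x00)
		return "SUCCESS";
	if (s->sct == 0x3 && s->sc == 0x02)
		return "ASYMMETRIC ACCESS INACCESSIBLE"; /* the notices above */
	return "other";
}

int main(void)
{
	/* Raw halfword with sct=0x3, sc=0x02 and p/m/dnr all zero, as logged. */
	uint16_t raw = (uint16_t)((0x3u << 9) | (0x02u << 1));
	struct nvme_status s;

	memcpy(&s, &raw, sizeof(s));
	printf("sct:%x sc:%02x p:%d m:%d dnr:%d -> %s\n",
	       (unsigned)s.sct, (unsigned)s.sc, s.p, s.m, s.dnr, status_name(&s));
	return 0;
}

(Running the sketch prints "sct:3 sc:02 p:0 m:0 dnr:0 -> ASYMMETRIC ACCESS INACCESSIBLE", i.e. the same fields these notices report.)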
00:31:54.816 [2024-07-14 05:45:58.973874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.973894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.973936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.973953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.973976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.816 [2024-07-14 05:45:58.973993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.974016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.974033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.974055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.974072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.974094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.974111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.974133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.974150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.974182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.974199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.974238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.816 [2024-07-14 05:45:58.974255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.974277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.816 [2024-07-14 05:45:58.974293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.974315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.974331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.974353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.974373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.974412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.974428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.974449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.974465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:54.816 [2024-07-14 05:45:58.974486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.816 [2024-07-14 05:45:58.974517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:54.817 [2024-07-14 05:45:58.974539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.817 [2024-07-14 05:45:58.974554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:54.817 [2024-07-14 05:45:58.974575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.817 [2024-07-14 05:45:58.974590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:54.817 [2024-07-14 05:45:58.974611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.817 [2024-07-14 05:45:58.974626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:54.817 [2024-07-14 05:45:58.974662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.817 [2024-07-14 05:45:58.974678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:54.817 [2024-07-14 05:45:58.974699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.817 [2024-07-14 05:45:58.974715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:54.817 [2024-07-14 05:45:58.974736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.817 [2024-07-14 05:45:58.974752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:54.817 [2024-07-14 05:45:58.974790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:55392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.817 [2024-07-14 05:45:58.974807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:54.817 [2024-07-14 05:45:58.974829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.817 [2024-07-14 05:45:58.974860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:54.817 [2024-07-14 05:45:58.974906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.817 [2024-07-14 05:45:58.974924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:54.817 [2024-07-14 05:45:58.974950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.817 [2024-07-14 05:45:58.974968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.817 [2024-07-14 05:45:58.974990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.817 [2024-07-14 05:45:58.975007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:54.817 [2024-07-14 05:45:58.975030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.817 [2024-07-14 05:45:58.975047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:54.817 [2024-07-14 05:45:58.975855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.817 [2024-07-14 05:45:58.975889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:54.817 [2024-07-14 05:45:58.975926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.817 [2024-07-14 05:45:58.975945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:54.817 [2024-07-14 05:45:58.975969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:54.818 [2024-07-14 05:45:58.975987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.976010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.818 [2024-07-14 05:45:58.976027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.976050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:54936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.818 [2024-07-14 05:45:58.976066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.976089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.818 [2024-07-14 05:45:58.976106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.976129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.976146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.976184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.976201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.976238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.976258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.976302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.976320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.976342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.976359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.976380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.818 [2024-07-14 05:45:58.976396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.976418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.818 [2024-07-14 05:45:58.976434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.976470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.976488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.976511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.818 [2024-07-14 05:45:58.976527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.977497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:55472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.818 [2024-07-14 05:45:58.977526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.977572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.818 [2024-07-14 05:45:58.977591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.977614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.977633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.977656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.977675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.977697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.977715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.977737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.977754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.977776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.977798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.977822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.818 [2024-07-14 05:45:58.977839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.977861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.818 [2024-07-14 05:45:58.977893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.977930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.818 [2024-07-14 05:45:58.977947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.977970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.977986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.978009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.978026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.978048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.978065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.978087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.978104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.978126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.978143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.978185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.818 [2024-07-14 05:45:58.978201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.978222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.978239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
00:31:54.818 [2024-07-14 05:45:58.978260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.978276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.978296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.818 [2024-07-14 05:45:58.978331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.978356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.978373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.978400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.978417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.978438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.978455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:54.818 [2024-07-14 05:45:58.978476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.818 [2024-07-14 05:45:58.978493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.978514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.978530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.978552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.978574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.978596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.978612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.978633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.978649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.978686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.978702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.979397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.979422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.979449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:55840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.979467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.979506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.979522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.979548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.979564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.979586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.979616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.979638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.979653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.979673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.979689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.979709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.979724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.979750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.979766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.979786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.979802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.979822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.979857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.981101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.981153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.981192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.981247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.981291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.981329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.981366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:54.819 [2024-07-14 05:45:58.981404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.981442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.981480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.981517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.981570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.981608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.981648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.981686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.981738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.981779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 
lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.981822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.981887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.981954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.981977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.981994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.982016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.982033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.982055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.982072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.982094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.819 [2024-07-14 05:45:58.982111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:54.819 [2024-07-14 05:45:58.982133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.819 [2024-07-14 05:45:58.982164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.982187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.820 [2024-07-14 05:45:58.982203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.982240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:56040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.982256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.982277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.982293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.982313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.982332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.982354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.982370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.982390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.820 [2024-07-14 05:45:58.982406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.982427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.982443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.984364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:56064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.984405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.984433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:56096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.984451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.984473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:56128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.984491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.984513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.820 [2024-07-14 05:45:58.984530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.984557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.820 [2024-07-14 05:45:58.984575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:31:54.820 [2024-07-14 05:45:58.984597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.820 [2024-07-14 05:45:58.984613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.984636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.820 [2024-07-14 05:45:58.984653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.984689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.820 [2024-07-14 05:45:58.984706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.984728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.820 [2024-07-14 05:45:58.984749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.984787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.820 [2024-07-14 05:45:58.984804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.984827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:56424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.820 [2024-07-14 05:45:58.984860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.984895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:56160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.984913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.984937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:56192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.984955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.984978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.984995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.985018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.820 [2024-07-14 05:45:58.985035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.985058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.820 [2024-07-14 05:45:58.985075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.985098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.985116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.985138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.985155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.985201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.820 [2024-07-14 05:45:58.985218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.985240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.985257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.985294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.820 [2024-07-14 05:45:58.985310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.985337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.985354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.985375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.820 [2024-07-14 05:45:58.985391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.985430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.820 [2024-07-14 05:45:58.985447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.985469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.985485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.985507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.820 [2024-07-14 05:45:58.985524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.985546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.985563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.985585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.985601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.985623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.985640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.986258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:55832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.986284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.986312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.986330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.986354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.986372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.986394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.986412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.986443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.820 [2024-07-14 05:45:58.986461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:54.820 [2024-07-14 05:45:58.986485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:54.821 [2024-07-14 05:45:58.986518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.986547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.986580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.986603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.986619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.986641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.986658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.987826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:56048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:58.987871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.987917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:56088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:58.987937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.987961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.987978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.988017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.988058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.988098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.988144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.988199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.988256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:58.988295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.988333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.988371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.988410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.988448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:56160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:58.988486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:58.988526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.988563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:58.988601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:58.988653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:58.988695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.988732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.988769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:58.988812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:56232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:58.988871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:56264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:58.988930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.988953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:56184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:58.988970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:31:54.821 [2024-07-14 05:45:58.988991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:58.989008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.989030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:56000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:58.989047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.989069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.989085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:58.989109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:58.989126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:59.004841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:59.004883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:59.004915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:59.004934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:59.004964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:59.004982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:59.005004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:56288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:59.005021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:59.005043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:56032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.821 [2024-07-14 05:45:59.005075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:59.005098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:59.005115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:59.005154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:56632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:59.005171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:54.821 [2024-07-14 05:45:59.005208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.821 [2024-07-14 05:45:59.005225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.005267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.822 [2024-07-14 05:45:59.005286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.005309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:56104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.822 [2024-07-14 05:45:59.005342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.005365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.822 [2024-07-14 05:45:59.005382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.005405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:56696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.822 [2024-07-14 05:45:59.005422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.005444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:56712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.822 [2024-07-14 05:45:59.005461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.005484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:56728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.822 [2024-07-14 05:45:59.005500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.005527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:56088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.822 [2024-07-14 05:45:59.005544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.005567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.822 [2024-07-14 05:45:59.005590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.005614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.822 [2024-07-14 05:45:59.005632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.005670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.822 [2024-07-14 05:45:59.005687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.005724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:56096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.822 [2024-07-14 05:45:59.005740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.005762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.822 [2024-07-14 05:45:59.005777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.005798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.822 [2024-07-14 05:45:59.005830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.005853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.822 [2024-07-14 05:45:59.005895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.005922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.822 [2024-07-14 05:45:59.005940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.005962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.822 [2024-07-14 05:45:59.005979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.006002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.822 [2024-07-14 05:45:59.006019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.006041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:56232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:54.822 [2024-07-14 05:45:59.006059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.006082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:56184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.822 [2024-07-14 05:45:59.006103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.006127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.822 [2024-07-14 05:45:59.006144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.006167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:56472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.822 [2024-07-14 05:45:59.006199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.006222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.822 [2024-07-14 05:45:59.006240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.006262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.822 [2024-07-14 05:45:59.006278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.006315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.822 [2024-07-14 05:45:59.006331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.006353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:56432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.822 [2024-07-14 05:45:59.006369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.006390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.822 [2024-07-14 05:45:59.006406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.006443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.822 [2024-07-14 05:45:59.006460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.006482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:103 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.822 [2024-07-14 05:45:59.006498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.006521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:56072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.822 [2024-07-14 05:45:59.006537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.008077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.822 [2024-07-14 05:45:59.008102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.008131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.822 [2024-07-14 05:45:59.008156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.008181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.822 [2024-07-14 05:45:59.008199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.008222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.822 [2024-07-14 05:45:59.008240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.008263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.822 [2024-07-14 05:45:59.008280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.008318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.822 [2024-07-14 05:45:59.008335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:54.822 [2024-07-14 05:45:59.008358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.008374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.008396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.008412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.008434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.008450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.008472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:56864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.008488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.008510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.008527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.008548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:56520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.008565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.008587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:56552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.008604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.008626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:56584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.008643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.008669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:56328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.008687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.008724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.008740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.008761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:56224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.008777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.008799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.008815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:31:54.823 [2024-07-14 05:45:59.008836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.008876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.008903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:56288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.008920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.008942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.008959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.008980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.008997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.009018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:56104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.009035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.009057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.009073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.009095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.009111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.009134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.009165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.009192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.009209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.009247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.009263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.009283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.009299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.009319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.009335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.009356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:56232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.009372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.009393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:56000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.009408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.009429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.009444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.009465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.009480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.009501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.009517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.009539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.009556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.011454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:56888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.011477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.011503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.011521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.011542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.011563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.011585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:56488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.011601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.011626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.011659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.011683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:56952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.011714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.011738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:56968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.011755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.011777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:56984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.011794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.011815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.823 [2024-07-14 05:45:59.011847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.011879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:56608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.011898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.011921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:56640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.823 [2024-07-14 05:45:59.011939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:54.823 [2024-07-14 05:45:59.011962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:56672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:54.823 [2024-07-14 05:45:59.011978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.012000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:56704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.824 [2024-07-14 05:45:59.012018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.012041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:56736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.824 [2024-07-14 05:45:59.012060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.012082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.824 [2024-07-14 05:45:59.012104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.012128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:56800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.824 [2024-07-14 05:45:59.012146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.012168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.824 [2024-07-14 05:45:59.012200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.012224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.824 [2024-07-14 05:45:59.012241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.012263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:56520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.824 [2024-07-14 05:45:59.012295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.012317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:56584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.824 [2024-07-14 05:45:59.012334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.012371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:56392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.824 [2024-07-14 05:45:59.012390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.012411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.824 [2024-07-14 05:45:59.012428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.012450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:56288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.824 [2024-07-14 05:45:59.012466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.012507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.824 [2024-07-14 05:45:59.012524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.012563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.824 [2024-07-14 05:45:59.012581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.013456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.824 [2024-07-14 05:45:59.013495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.013548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.824 [2024-07-14 05:45:59.013568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.013596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.824 [2024-07-14 05:45:59.013612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.013633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.824 [2024-07-14 05:45:59.013648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.013704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.824 [2024-07-14 05:45:59.013724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.013763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.824 [2024-07-14 05:45:59.013780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.013802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:56528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.824 [2024-07-14 05:45:59.013819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.013841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:56592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.824 [2024-07-14 05:45:59.013857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.013905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.824 [2024-07-14 05:45:59.013923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.013945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:57016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.824 [2024-07-14 05:45:59.013963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.013985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.824 [2024-07-14 05:45:59.014003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.014024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.824 [2024-07-14 05:45:59.014042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.014065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.824 [2024-07-14 05:45:59.014082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.014574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:57064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.824 [2024-07-14 05:45:59.014598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.014629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:56744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.824 [2024-07-14 05:45:59.014648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:54.824 [2024-07-14 05:45:59.014671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:56776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.824 [2024-07-14 05:45:59.014688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 
dnr:0
00:31:54.824 [2024-07-14 05:45:59.014710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:56808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.824 [2024-07-14 05:45:59.014726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:31:54.824 [2024-07-14 05:45:59.014749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:56840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.824 [2024-07-14 05:45:59.014766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:31:54.824 [2024-07-14 05:45:59.014802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:57088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:54.824 [2024-07-14 05:45:59.014819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:31:54.824 [2024-07-14 05:45:59.014841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:57104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:54.824 [2024-07-14 05:45:59.014857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:31:54.824 [2024-07-14 05:45:59.014902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:54.824 [2024-07-14 05:45:59.014921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:31:54.824 [2024-07-14 05:45:59.014958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:57136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:54.824 [2024-07-14 05:45:59.014975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:31:54.824 [2024-07-14 05:45:59.014997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:57152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:54.824 [2024-07-14 05:45:59.015015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:31:54.824 [2024-07-14 05:45:59.015036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.824 [2024-07-14 05:45:59.015053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:31:54.824 Received shutdown signal, test time was about 32.018671 seconds
00:31:54.824
00:31:54.824 Latency(us)
00:31:54.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:54.824 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:54.824 Verification LBA range: start 0x0 length 0x4000
00:31:54.824 Nvme0n1 : 32.02 8062.52 31.49 0.00 0.00 15849.82 257.90 4026531.84
00:31:54.824 ===================================================================================================================
00:31:54.824 Total : 8062.52 31.49 0.00 0.00 15849.82 257.90 4026531.84
00:31:54.824 05:46:01
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:55.084 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:55.084 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:55.084 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:55.084 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:55.084 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:55.084 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:55.084 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:55.084 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:55.084 05:46:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:55.084 rmmod nvme_tcp 00:31:55.084 rmmod nvme_fabrics 00:31:55.084 rmmod nvme_keyring 00:31:55.084 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:55.084 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:55.084 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:55.084 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3360242 ']' 00:31:55.084 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3360242 00:31:55.084 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3360242 ']' 00:31:55.084 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3360242 00:31:55.084 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:55.084 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:55.084 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3360242 00:31:55.084 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:55.084 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:55.084 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3360242' 00:31:55.084 killing process with pid 3360242 00:31:55.084 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3360242 00:31:55.084 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3360242 00:31:55.345 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:55.345 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:55.345 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:55.345 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:55.345 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:55.345 
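The trace above, which continues below through remove_spdk_ns, is the standard teardown for these nvmf host tests: delete the test subsystem over JSON-RPC, sync and unload the kernel NVMe initiator modules, kill the SPDK target process, then clean up the test addresses. A condensed sketch of that sequence follows; the $nvmfpid variable and the standalone-script form are illustrative, not the actual nvmf/common.sh helpers.

#!/usr/bin/env bash
# Sketch of the teardown traced above; $nvmfpid and the script form are
# illustrative, the real logic lives in nvmf/common.sh (nvmftestfini).
set -e

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# 1. Remove the subsystem the test created on the running nvmf target.
"$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# 2. Flush outstanding I/O and unload the kernel initiator modules; in the
#    trace, removing nvme-tcp also pulled out nvme_fabrics and nvme_keyring.
sync
modprobe -v -r nvme-tcp || true
modprobe -v -r nvme-fabrics || true

# 3. Stop the SPDK target process started for the test ('wait' only applies
#    when the target is a child of this shell, as it is in the framework).
if [[ -n "${nvmfpid:-}" ]] && kill -0 "$nvmfpid" 2>/dev/null; then
    kill "$nvmfpid"
    wait "$nvmfpid" || true
fi

# 4. Drop the test addresses from the NIC named in the trace (cvl_0_1).
ip -4 addr flush cvl_0_1 || true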
05:46:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.345 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:55.345 05:46:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.257 05:46:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:57.257 00:31:57.257 real 0m40.600s 00:31:57.257 user 2m2.156s 00:31:57.257 sys 0m10.414s 00:31:57.257 05:46:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:57.257 05:46:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:57.257 ************************************ 00:31:57.257 END TEST nvmf_host_multipath_status 00:31:57.257 ************************************ 00:31:57.257 05:46:04 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:57.257 05:46:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:57.257 05:46:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:57.257 05:46:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:57.516 ************************************ 00:31:57.516 START TEST nvmf_discovery_remove_ifc 00:31:57.516 ************************************ 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:57.516 * Looking for test storage... 00:31:57.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.516 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@47 -- # : 0 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:57.517 05:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- 
# local -a pci_net_devs 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:59.460 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:59.460 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:59.461 05:46:06 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:59.461 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:59.461 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:59.461 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:59.461 05:46:06 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:59.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:31:59.461 00:31:59.461 --- 10.0.0.2 ping statistics --- 00:31:59.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.461 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:59.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:31:59.461 00:31:59.461 --- 10.0.0.1 ping statistics --- 00:31:59.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.461 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3366610 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3366610 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3366610 ']' 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:59.461 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.461 [2024-07-14 05:46:06.564814] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
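For reference, the network bring-up traced above (nvmf_tcp_init) boils down to the following sequence; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are specific to this machine's E810 ports and this run:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                        # target gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

Both pings must succeed before the target application is launched inside the namespace; that is what the two ping statistics blocks above confirm.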
00:31:59.461 [2024-07-14 05:46:06.564909] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.720 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.720 [2024-07-14 05:46:06.633786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.720 [2024-07-14 05:46:06.724246] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.720 [2024-07-14 05:46:06.724307] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.720 [2024-07-14 05:46:06.724333] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.720 [2024-07-14 05:46:06.724346] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.720 [2024-07-14 05:46:06.724359] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.720 [2024-07-14 05:46:06.724395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.979 [2024-07-14 05:46:06.870207] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.979 [2024-07-14 05:46:06.878394] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:59.979 null0 00:31:59.979 [2024-07-14 05:46:06.910353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3366632 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3366632 /tmp/host.sock 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3366632 ']' 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:59.979 
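The rpc_cmd call at host/discovery_remove_ifc.sh@43 above feeds its JSON-RPC batch over stdin, so the individual RPCs are not echoed in the trace; only their effects are visible (TCP transport init, a discovery listener on 10.0.0.2:8009, a null0 bdev, a data listener on 10.0.0.2:4420). A rough standalone equivalent, with illustrative bdev sizing, would be:

    # Approximation of the un-echoed RPC batch; null0 size (MB) and block size are illustrative.
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512
    # --allow-any-host is an assumption; the script may instead add nqn.2021-12.io.spdk:test explicitly
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420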
05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:59.979 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:59.979 05:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.979 [2024-07-14 05:46:06.974981] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:59.979 [2024-07-14 05:46:06.975048] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3366632 ] 00:31:59.979 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.979 [2024-07-14 05:46:07.037142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.238 [2024-07-14 05:46:07.128606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.238 05:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:00.238 05:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:00.238 05:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:00.238 05:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:00.238 05:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.238 05:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:00.238 05:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.238 05:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:00.238 05:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.238 05:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:00.238 05:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.238 05:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:00.238 05:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.238 05:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.609 [2024-07-14 05:46:08.298268] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:01.609 [2024-07-14 05:46:08.298293] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:01.609 [2024-07-14 05:46:08.298314] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:01.609 [2024-07-14 05:46:08.384589] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:01.609 [2024-07-14 05:46:08.448158] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:01.609 [2024-07-14 05:46:08.448248] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:01.609 [2024-07-14 05:46:08.448285] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:01.609 [2024-07-14 05:46:08.448307] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:01.609 [2024-07-14 05:46:08.448336] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:01.609 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.609 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:01.609 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:01.609 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.609 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.609 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:01.609 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.609 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:01.609 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:01.609 [2024-07-14 05:46:08.456501] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15a3df0 was disconnected and freed. delete nvme_qpair. 
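Condensed, the host-side half of the test is a second nvmf_tgt acting as the initiator, driven over /tmp/host.sock. The flags below are copied from the commands in the trace; the polling loop is a simplified stand-in for the harness's wait_for_bdev/get_bdev_list helpers:

    build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    # (the harness waits for /tmp/host.sock to appear before issuing RPCs)
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    # wait_for_bdev nvme0n1: poll until the discovered namespace shows up as a bdev
    until scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | grep -qx nvme0n1; do
        sleep 1
    done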
00:32:01.609 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.609 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:01.609 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:01.609 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:01.609 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:01.609 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:01.610 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.610 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:01.610 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.610 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.610 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:01.610 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:01.610 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.610 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:01.610 05:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:02.542 05:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:02.542 05:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:02.542 05:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:02.542 05:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.542 05:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.542 05:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:02.542 05:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:02.542 05:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.542 05:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:02.542 05:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:03.917 05:46:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:03.917 05:46:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:03.917 05:46:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:03.917 05:46:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.917 05:46:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:03.917 05:46:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:32:03.917 05:46:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:03.917 05:46:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.917 05:46:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:03.917 05:46:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:04.852 05:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.852 05:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.852 05:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.852 05:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.852 05:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.852 05:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.852 05:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.852 05:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.852 05:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:04.852 05:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:05.787 05:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:05.787 05:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:05.787 05:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.787 05:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:05.787 05:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.787 05:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:05.787 05:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:05.787 05:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.787 05:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:05.787 05:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:06.719 05:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:06.719 05:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:06.719 05:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:06.719 05:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.719 05:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:06.719 05:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:06.719 05:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:06.719 05:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:06.719 05:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:06.719 05:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:06.975 [2024-07-14 05:46:13.889692] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:06.975 [2024-07-14 05:46:13.889761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.975 [2024-07-14 05:46:13.889785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.975 [2024-07-14 05:46:13.889814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.975 [2024-07-14 05:46:13.889829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.975 [2024-07-14 05:46:13.889844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.975 [2024-07-14 05:46:13.889858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.975 [2024-07-14 05:46:13.889892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.975 [2024-07-14 05:46:13.889922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.975 [2024-07-14 05:46:13.889935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.975 [2024-07-14 05:46:13.889947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.975 [2024-07-14 05:46:13.889960] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156af80 is same with the state(5) to be set 00:32:06.975 [2024-07-14 05:46:13.899710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156af80 (9): Bad file descriptor 00:32:06.976 [2024-07-14 05:46:13.909755] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:07.906 05:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:07.906 05:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.906 05:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.906 05:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.906 05:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:07.906 05:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.906 05:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.906 [2024-07-14 05:46:14.941895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:07.906 [2024-07-14 
05:46:14.941956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x156af80 with addr=10.0.0.2, port=4420 00:32:07.906 [2024-07-14 05:46:14.941979] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156af80 is same with the state(5) to be set 00:32:07.906 [2024-07-14 05:46:14.942013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156af80 (9): Bad file descriptor 00:32:07.906 [2024-07-14 05:46:14.942405] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:07.906 [2024-07-14 05:46:14.942438] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:07.906 [2024-07-14 05:46:14.942456] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:07.906 [2024-07-14 05:46:14.942474] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:07.906 [2024-07-14 05:46:14.942499] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.906 [2024-07-14 05:46:14.942518] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:07.906 05:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.906 05:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:07.906 05:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:08.839 [2024-07-14 05:46:15.945014] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:08.839 [2024-07-14 05:46:15.945054] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:08.839 [2024-07-14 05:46:15.945078] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:08.840 [2024-07-14 05:46:15.945092] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:08.840 [2024-07-14 05:46:15.945113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.840 [2024-07-14 05:46:15.945167] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:08.840 [2024-07-14 05:46:15.945221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:08.840 [2024-07-14 05:46:15.945245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:08.840 [2024-07-14 05:46:15.945264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:08.840 [2024-07-14 05:46:15.945279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:08.840 [2024-07-14 05:46:15.945302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:08.840 [2024-07-14 05:46:15.945317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:08.840 [2024-07-14 05:46:15.945335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:08.840 [2024-07-14 05:46:15.945349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:08.840 [2024-07-14 05:46:15.945366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:08.840 [2024-07-14 05:46:15.945380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:08.840 [2024-07-14 05:46:15.945394] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
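The step under test, copied from host/discovery_remove_ifc.sh@75-76 above, removes the target-side address and downs the link; with --ctrlr-loss-timeout-sec 2 the host gives up reconnecting after the errors just logged and deletes nvme0n1, which the harness detects by polling for an empty bdev list (the loop here is a simplified stand-in for wait_for_bdev ''):

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # wait until the host has torn the controller down and no bdevs remain
    until [ -z "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name')" ]; do
        sleep 1
    done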
00:32:09.098 [2024-07-14 05:46:15.945695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156a410 (9): Bad file descriptor 00:32:09.098 [2024-07-14 05:46:15.946717] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:09.098 [2024-07-14 05:46:15.946743] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:09.098 05:46:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:09.098 05:46:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.098 05:46:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:09.098 05:46:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.098 05:46:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:09.098 05:46:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:09.098 05:46:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:09.098 05:46:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.098 05:46:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:09.098 05:46:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:09.098 05:46:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:09.098 05:46:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:09.098 05:46:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:09.098 05:46:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.098 05:46:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:09.098 05:46:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.098 05:46:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:09.098 05:46:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:09.098 05:46:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:09.098 05:46:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.098 05:46:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:09.098 05:46:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:10.030 05:46:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:10.030 05:46:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:10.030 05:46:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:10.030 05:46:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.030 05:46:17 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:32:10.030 05:46:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:10.030 05:46:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:10.030 05:46:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.030 05:46:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:10.030 05:46:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:10.963 [2024-07-14 05:46:18.005966] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:10.963 [2024-07-14 05:46:18.005991] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:10.963 [2024-07-14 05:46:18.006012] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:11.220 05:46:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:11.220 [2024-07-14 05:46:18.133477] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:11.220 05:46:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.220 05:46:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.220 05:46:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:11.220 05:46:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:11.220 05:46:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:11.220 05:46:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:11.220 05:46:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.220 05:46:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:11.220 05:46:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:11.220 [2024-07-14 05:46:18.314911] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:11.220 [2024-07-14 05:46:18.314977] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:11.220 [2024-07-14 05:46:18.315011] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:11.220 [2024-07-14 05:46:18.315033] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:11.220 [2024-07-14 05:46:18.315046] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:11.220 [2024-07-14 05:46:18.322761] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1586ab0 was disconnected and freed. delete nvme_qpair. 
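Restoring the interface (host/discovery_remove_ifc.sh@82-83 above) lets the still-running discovery service reattach; the new controller instance is registered as nvme1, so the harness now waits for nvme1n1 rather than nvme0n1. A simplified equivalent of that wait:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # wait_for_bdev nvme1n1: the rediscovered namespace appears under the new controller name
    until scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | grep -qx nvme1n1; do
        sleep 1
    done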
00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3366632 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3366632 ']' 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3366632 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3366632 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3366632' 00:32:12.155 killing process with pid 3366632 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3366632 00:32:12.155 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3366632 00:32:12.414 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:12.414 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:12.414 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:12.414 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:12.414 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:12.414 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:12.414 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:12.414 rmmod nvme_tcp 00:32:12.414 rmmod nvme_fabrics 00:32:12.414 rmmod nvme_keyring 00:32:12.414 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:12.414 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:12.414 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
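The teardown running across these lines (killprocess plus nvmftestfini) amounts to the following, condensed; $hostpid and $nvmfpid are the script variables holding this run's PIDs (3366632 and 3366610), and the netns deletion is roughly what _remove_spdk_ns does:

    kill $hostpid                          # host-side nvmf_tgt on /tmp/host.sock
    sync
    modprobe -v -r nvme-tcp                # unload initiator kernel modules
    modprobe -v -r nvme-fabrics
    kill $nvmfpid                          # target nvmf_tgt running inside cvl_0_0_ns_spdk
    ip netns delete cvl_0_0_ns_spdk        # roughly what _remove_spdk_ns performs
    ip -4 addr flush cvl_0_1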
00:32:12.414 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3366610 ']' 00:32:12.414 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3366610 00:32:12.414 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3366610 ']' 00:32:12.414 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3366610 00:32:12.414 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:12.414 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:12.414 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3366610 00:32:12.673 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:12.673 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:12.673 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3366610' 00:32:12.673 killing process with pid 3366610 00:32:12.673 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3366610 00:32:12.673 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3366610 00:32:12.673 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:12.673 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:12.673 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:12.673 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:12.673 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:12.673 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.673 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:12.673 05:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.201 05:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:15.201 00:32:15.201 real 0m17.448s 00:32:15.201 user 0m25.242s 00:32:15.201 sys 0m3.018s 00:32:15.201 05:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:15.201 05:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:15.201 ************************************ 00:32:15.201 END TEST nvmf_discovery_remove_ifc 00:32:15.201 ************************************ 00:32:15.201 05:46:21 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:15.201 05:46:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:15.201 05:46:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:15.201 05:46:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:15.201 ************************************ 00:32:15.201 START TEST nvmf_identify_kernel_target 00:32:15.201 ************************************ 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:15.201 * Looking for test storage... 00:32:15.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
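nvmftestinit begins by tearing down whatever the previous test left behind: remove_spdk_ns, traced above, clears the target network namespace before prepare_net_devs rebuilds the cvl_0_0/cvl_0_1 pair. A rough stand-alone equivalent of that cleanup, assuming the namespace and interface names used elsewhere in this log (the real helper lives in the SPDK common scripts and may do more):

    # Kill anything still running in the old target namespace, then delete it. (sketch)
    ip netns pids cvl_0_0_ns_spdk 2>/dev/null | xargs -r kill
    ip netns del cvl_0_0_ns_spdk 2>/dev/null || true
    # Drop stale addresses from the initiator-side interface before it is re-addressed.
    ip -4 addr flush cvl_0_1 2>/dev/null || true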
00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:15.201 05:46:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:17.155 05:46:23 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:17.155 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:17.155 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:17.155 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:17.156 
05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:17.156 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:17.156 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:17.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:17.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:32:17.156 00:32:17.156 --- 10.0.0.2 ping statistics --- 00:32:17.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.156 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:17.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:17.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:32:17.156 00:32:17.156 --- 10.0.0.1 ping statistics --- 00:32:17.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.156 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.156 
05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:17.156 05:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:18.093 Waiting for block devices as requested 00:32:18.093 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:18.353 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:18.353 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:18.353 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:18.353 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:18.612 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:18.612 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:18.612 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:18.612 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:18.870 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:18.870 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:18.871 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:18.871 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:19.127 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:19.127 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:19.127 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:19.127 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:19.385 05:46:26 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:19.385 No valid GPT data, bailing 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:19.385 00:32:19.385 Discovery Log Number of Records 2, Generation counter 2 00:32:19.385 =====Discovery Log Entry 0====== 00:32:19.385 trtype: tcp 00:32:19.385 adrfam: ipv4 00:32:19.385 subtype: current discovery subsystem 00:32:19.385 treq: not specified, sq flow control disable supported 00:32:19.385 portid: 1 00:32:19.385 trsvcid: 4420 00:32:19.385 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:19.385 traddr: 10.0.0.1 00:32:19.385 eflags: none 00:32:19.385 sectype: none 00:32:19.385 =====Discovery Log Entry 1====== 
00:32:19.385 trtype: tcp 00:32:19.385 adrfam: ipv4 00:32:19.385 subtype: nvme subsystem 00:32:19.385 treq: not specified, sq flow control disable supported 00:32:19.385 portid: 1 00:32:19.385 trsvcid: 4420 00:32:19.385 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:19.385 traddr: 10.0.0.1 00:32:19.385 eflags: none 00:32:19.385 sectype: none 00:32:19.385 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:19.385 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:19.385 EAL: No free 2048 kB hugepages reported on node 1 00:32:19.644 ===================================================== 00:32:19.644 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:19.644 ===================================================== 00:32:19.644 Controller Capabilities/Features 00:32:19.644 ================================ 00:32:19.644 Vendor ID: 0000 00:32:19.644 Subsystem Vendor ID: 0000 00:32:19.644 Serial Number: c1575ced8d9780d2b7dd 00:32:19.644 Model Number: Linux 00:32:19.644 Firmware Version: 6.7.0-68 00:32:19.644 Recommended Arb Burst: 0 00:32:19.644 IEEE OUI Identifier: 00 00 00 00:32:19.644 Multi-path I/O 00:32:19.644 May have multiple subsystem ports: No 00:32:19.644 May have multiple controllers: No 00:32:19.644 Associated with SR-IOV VF: No 00:32:19.644 Max Data Transfer Size: Unlimited 00:32:19.644 Max Number of Namespaces: 0 00:32:19.644 Max Number of I/O Queues: 1024 00:32:19.644 NVMe Specification Version (VS): 1.3 00:32:19.644 NVMe Specification Version (Identify): 1.3 00:32:19.644 Maximum Queue Entries: 1024 00:32:19.644 Contiguous Queues Required: No 00:32:19.644 Arbitration Mechanisms Supported 00:32:19.644 Weighted Round Robin: Not Supported 00:32:19.644 Vendor Specific: Not Supported 00:32:19.644 Reset Timeout: 7500 ms 00:32:19.644 Doorbell Stride: 4 bytes 00:32:19.644 NVM Subsystem Reset: Not Supported 00:32:19.644 Command Sets Supported 00:32:19.644 NVM Command Set: Supported 00:32:19.644 Boot Partition: Not Supported 00:32:19.644 Memory Page Size Minimum: 4096 bytes 00:32:19.644 Memory Page Size Maximum: 4096 bytes 00:32:19.644 Persistent Memory Region: Not Supported 00:32:19.644 Optional Asynchronous Events Supported 00:32:19.644 Namespace Attribute Notices: Not Supported 00:32:19.644 Firmware Activation Notices: Not Supported 00:32:19.644 ANA Change Notices: Not Supported 00:32:19.644 PLE Aggregate Log Change Notices: Not Supported 00:32:19.644 LBA Status Info Alert Notices: Not Supported 00:32:19.644 EGE Aggregate Log Change Notices: Not Supported 00:32:19.644 Normal NVM Subsystem Shutdown event: Not Supported 00:32:19.644 Zone Descriptor Change Notices: Not Supported 00:32:19.644 Discovery Log Change Notices: Supported 00:32:19.644 Controller Attributes 00:32:19.644 128-bit Host Identifier: Not Supported 00:32:19.644 Non-Operational Permissive Mode: Not Supported 00:32:19.644 NVM Sets: Not Supported 00:32:19.644 Read Recovery Levels: Not Supported 00:32:19.644 Endurance Groups: Not Supported 00:32:19.644 Predictable Latency Mode: Not Supported 00:32:19.644 Traffic Based Keep ALive: Not Supported 00:32:19.644 Namespace Granularity: Not Supported 00:32:19.644 SQ Associations: Not Supported 00:32:19.644 UUID List: Not Supported 00:32:19.644 Multi-Domain Subsystem: Not Supported 00:32:19.644 Fixed Capacity Management: Not Supported 00:32:19.644 Variable Capacity Management: Not 
Supported 00:32:19.644 Delete Endurance Group: Not Supported 00:32:19.644 Delete NVM Set: Not Supported 00:32:19.644 Extended LBA Formats Supported: Not Supported 00:32:19.644 Flexible Data Placement Supported: Not Supported 00:32:19.644 00:32:19.644 Controller Memory Buffer Support 00:32:19.644 ================================ 00:32:19.644 Supported: No 00:32:19.644 00:32:19.644 Persistent Memory Region Support 00:32:19.644 ================================ 00:32:19.644 Supported: No 00:32:19.644 00:32:19.644 Admin Command Set Attributes 00:32:19.644 ============================ 00:32:19.644 Security Send/Receive: Not Supported 00:32:19.644 Format NVM: Not Supported 00:32:19.644 Firmware Activate/Download: Not Supported 00:32:19.644 Namespace Management: Not Supported 00:32:19.644 Device Self-Test: Not Supported 00:32:19.644 Directives: Not Supported 00:32:19.644 NVMe-MI: Not Supported 00:32:19.644 Virtualization Management: Not Supported 00:32:19.644 Doorbell Buffer Config: Not Supported 00:32:19.644 Get LBA Status Capability: Not Supported 00:32:19.644 Command & Feature Lockdown Capability: Not Supported 00:32:19.644 Abort Command Limit: 1 00:32:19.644 Async Event Request Limit: 1 00:32:19.644 Number of Firmware Slots: N/A 00:32:19.644 Firmware Slot 1 Read-Only: N/A 00:32:19.644 Firmware Activation Without Reset: N/A 00:32:19.644 Multiple Update Detection Support: N/A 00:32:19.644 Firmware Update Granularity: No Information Provided 00:32:19.644 Per-Namespace SMART Log: No 00:32:19.644 Asymmetric Namespace Access Log Page: Not Supported 00:32:19.644 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:19.644 Command Effects Log Page: Not Supported 00:32:19.644 Get Log Page Extended Data: Supported 00:32:19.644 Telemetry Log Pages: Not Supported 00:32:19.644 Persistent Event Log Pages: Not Supported 00:32:19.644 Supported Log Pages Log Page: May Support 00:32:19.644 Commands Supported & Effects Log Page: Not Supported 00:32:19.644 Feature Identifiers & Effects Log Page:May Support 00:32:19.644 NVMe-MI Commands & Effects Log Page: May Support 00:32:19.644 Data Area 4 for Telemetry Log: Not Supported 00:32:19.644 Error Log Page Entries Supported: 1 00:32:19.644 Keep Alive: Not Supported 00:32:19.644 00:32:19.644 NVM Command Set Attributes 00:32:19.644 ========================== 00:32:19.644 Submission Queue Entry Size 00:32:19.644 Max: 1 00:32:19.644 Min: 1 00:32:19.644 Completion Queue Entry Size 00:32:19.644 Max: 1 00:32:19.644 Min: 1 00:32:19.644 Number of Namespaces: 0 00:32:19.644 Compare Command: Not Supported 00:32:19.644 Write Uncorrectable Command: Not Supported 00:32:19.644 Dataset Management Command: Not Supported 00:32:19.644 Write Zeroes Command: Not Supported 00:32:19.644 Set Features Save Field: Not Supported 00:32:19.644 Reservations: Not Supported 00:32:19.644 Timestamp: Not Supported 00:32:19.644 Copy: Not Supported 00:32:19.644 Volatile Write Cache: Not Present 00:32:19.644 Atomic Write Unit (Normal): 1 00:32:19.644 Atomic Write Unit (PFail): 1 00:32:19.644 Atomic Compare & Write Unit: 1 00:32:19.645 Fused Compare & Write: Not Supported 00:32:19.645 Scatter-Gather List 00:32:19.645 SGL Command Set: Supported 00:32:19.645 SGL Keyed: Not Supported 00:32:19.645 SGL Bit Bucket Descriptor: Not Supported 00:32:19.645 SGL Metadata Pointer: Not Supported 00:32:19.645 Oversized SGL: Not Supported 00:32:19.645 SGL Metadata Address: Not Supported 00:32:19.645 SGL Offset: Supported 00:32:19.645 Transport SGL Data Block: Not Supported 00:32:19.645 Replay Protected Memory Block: 
Not Supported 00:32:19.645 00:32:19.645 Firmware Slot Information 00:32:19.645 ========================= 00:32:19.645 Active slot: 0 00:32:19.645 00:32:19.645 00:32:19.645 Error Log 00:32:19.645 ========= 00:32:19.645 00:32:19.645 Active Namespaces 00:32:19.645 ================= 00:32:19.645 Discovery Log Page 00:32:19.645 ================== 00:32:19.645 Generation Counter: 2 00:32:19.645 Number of Records: 2 00:32:19.645 Record Format: 0 00:32:19.645 00:32:19.645 Discovery Log Entry 0 00:32:19.645 ---------------------- 00:32:19.645 Transport Type: 3 (TCP) 00:32:19.645 Address Family: 1 (IPv4) 00:32:19.645 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:19.645 Entry Flags: 00:32:19.645 Duplicate Returned Information: 0 00:32:19.645 Explicit Persistent Connection Support for Discovery: 0 00:32:19.645 Transport Requirements: 00:32:19.645 Secure Channel: Not Specified 00:32:19.645 Port ID: 1 (0x0001) 00:32:19.645 Controller ID: 65535 (0xffff) 00:32:19.645 Admin Max SQ Size: 32 00:32:19.645 Transport Service Identifier: 4420 00:32:19.645 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:19.645 Transport Address: 10.0.0.1 00:32:19.645 Discovery Log Entry 1 00:32:19.645 ---------------------- 00:32:19.645 Transport Type: 3 (TCP) 00:32:19.645 Address Family: 1 (IPv4) 00:32:19.645 Subsystem Type: 2 (NVM Subsystem) 00:32:19.645 Entry Flags: 00:32:19.645 Duplicate Returned Information: 0 00:32:19.645 Explicit Persistent Connection Support for Discovery: 0 00:32:19.645 Transport Requirements: 00:32:19.645 Secure Channel: Not Specified 00:32:19.645 Port ID: 1 (0x0001) 00:32:19.645 Controller ID: 65535 (0xffff) 00:32:19.645 Admin Max SQ Size: 32 00:32:19.645 Transport Service Identifier: 4420 00:32:19.645 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:19.645 Transport Address: 10.0.0.1 00:32:19.645 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:19.645 EAL: No free 2048 kB hugepages reported on node 1 00:32:19.645 get_feature(0x01) failed 00:32:19.645 get_feature(0x02) failed 00:32:19.645 get_feature(0x04) failed 00:32:19.645 ===================================================== 00:32:19.645 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:19.645 ===================================================== 00:32:19.645 Controller Capabilities/Features 00:32:19.645 ================================ 00:32:19.645 Vendor ID: 0000 00:32:19.645 Subsystem Vendor ID: 0000 00:32:19.645 Serial Number: a002253499160a770964 00:32:19.645 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:19.645 Firmware Version: 6.7.0-68 00:32:19.645 Recommended Arb Burst: 6 00:32:19.645 IEEE OUI Identifier: 00 00 00 00:32:19.645 Multi-path I/O 00:32:19.645 May have multiple subsystem ports: Yes 00:32:19.645 May have multiple controllers: Yes 00:32:19.645 Associated with SR-IOV VF: No 00:32:19.645 Max Data Transfer Size: Unlimited 00:32:19.645 Max Number of Namespaces: 1024 00:32:19.645 Max Number of I/O Queues: 128 00:32:19.645 NVMe Specification Version (VS): 1.3 00:32:19.645 NVMe Specification Version (Identify): 1.3 00:32:19.645 Maximum Queue Entries: 1024 00:32:19.645 Contiguous Queues Required: No 00:32:19.645 Arbitration Mechanisms Supported 00:32:19.645 Weighted Round Robin: Not Supported 00:32:19.645 Vendor Specific: Not Supported 
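The controller data printed around this point comes from spdk_nvme_identify aimed directly at the kernel target's NVM subsystem; the get_feature(0x01/0x02/0x04) failures just above are the tool probing optional features that the Linux nvmet target presumably does not expose, not test failures. The invocation, restated from the trace (path relative to the SPDK build tree):

    # Identify the kernel NVMe-oF/TCP target via a transport ID string.
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'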
00:32:19.645 Reset Timeout: 7500 ms 00:32:19.645 Doorbell Stride: 4 bytes 00:32:19.645 NVM Subsystem Reset: Not Supported 00:32:19.645 Command Sets Supported 00:32:19.645 NVM Command Set: Supported 00:32:19.645 Boot Partition: Not Supported 00:32:19.645 Memory Page Size Minimum: 4096 bytes 00:32:19.645 Memory Page Size Maximum: 4096 bytes 00:32:19.645 Persistent Memory Region: Not Supported 00:32:19.645 Optional Asynchronous Events Supported 00:32:19.645 Namespace Attribute Notices: Supported 00:32:19.645 Firmware Activation Notices: Not Supported 00:32:19.645 ANA Change Notices: Supported 00:32:19.645 PLE Aggregate Log Change Notices: Not Supported 00:32:19.645 LBA Status Info Alert Notices: Not Supported 00:32:19.645 EGE Aggregate Log Change Notices: Not Supported 00:32:19.645 Normal NVM Subsystem Shutdown event: Not Supported 00:32:19.645 Zone Descriptor Change Notices: Not Supported 00:32:19.645 Discovery Log Change Notices: Not Supported 00:32:19.645 Controller Attributes 00:32:19.645 128-bit Host Identifier: Supported 00:32:19.645 Non-Operational Permissive Mode: Not Supported 00:32:19.645 NVM Sets: Not Supported 00:32:19.645 Read Recovery Levels: Not Supported 00:32:19.645 Endurance Groups: Not Supported 00:32:19.645 Predictable Latency Mode: Not Supported 00:32:19.645 Traffic Based Keep ALive: Supported 00:32:19.645 Namespace Granularity: Not Supported 00:32:19.645 SQ Associations: Not Supported 00:32:19.645 UUID List: Not Supported 00:32:19.645 Multi-Domain Subsystem: Not Supported 00:32:19.645 Fixed Capacity Management: Not Supported 00:32:19.645 Variable Capacity Management: Not Supported 00:32:19.645 Delete Endurance Group: Not Supported 00:32:19.645 Delete NVM Set: Not Supported 00:32:19.645 Extended LBA Formats Supported: Not Supported 00:32:19.645 Flexible Data Placement Supported: Not Supported 00:32:19.645 00:32:19.645 Controller Memory Buffer Support 00:32:19.645 ================================ 00:32:19.645 Supported: No 00:32:19.645 00:32:19.645 Persistent Memory Region Support 00:32:19.645 ================================ 00:32:19.645 Supported: No 00:32:19.645 00:32:19.645 Admin Command Set Attributes 00:32:19.645 ============================ 00:32:19.645 Security Send/Receive: Not Supported 00:32:19.645 Format NVM: Not Supported 00:32:19.645 Firmware Activate/Download: Not Supported 00:32:19.645 Namespace Management: Not Supported 00:32:19.645 Device Self-Test: Not Supported 00:32:19.645 Directives: Not Supported 00:32:19.645 NVMe-MI: Not Supported 00:32:19.645 Virtualization Management: Not Supported 00:32:19.645 Doorbell Buffer Config: Not Supported 00:32:19.645 Get LBA Status Capability: Not Supported 00:32:19.645 Command & Feature Lockdown Capability: Not Supported 00:32:19.645 Abort Command Limit: 4 00:32:19.645 Async Event Request Limit: 4 00:32:19.645 Number of Firmware Slots: N/A 00:32:19.645 Firmware Slot 1 Read-Only: N/A 00:32:19.645 Firmware Activation Without Reset: N/A 00:32:19.645 Multiple Update Detection Support: N/A 00:32:19.645 Firmware Update Granularity: No Information Provided 00:32:19.645 Per-Namespace SMART Log: Yes 00:32:19.645 Asymmetric Namespace Access Log Page: Supported 00:32:19.645 ANA Transition Time : 10 sec 00:32:19.645 00:32:19.645 Asymmetric Namespace Access Capabilities 00:32:19.645 ANA Optimized State : Supported 00:32:19.645 ANA Non-Optimized State : Supported 00:32:19.645 ANA Inaccessible State : Supported 00:32:19.645 ANA Persistent Loss State : Supported 00:32:19.645 ANA Change State : Supported 00:32:19.645 ANAGRPID is not 
changed : No 00:32:19.645 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:19.645 00:32:19.645 ANA Group Identifier Maximum : 128 00:32:19.645 Number of ANA Group Identifiers : 128 00:32:19.645 Max Number of Allowed Namespaces : 1024 00:32:19.645 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:19.645 Command Effects Log Page: Supported 00:32:19.645 Get Log Page Extended Data: Supported 00:32:19.645 Telemetry Log Pages: Not Supported 00:32:19.645 Persistent Event Log Pages: Not Supported 00:32:19.645 Supported Log Pages Log Page: May Support 00:32:19.645 Commands Supported & Effects Log Page: Not Supported 00:32:19.645 Feature Identifiers & Effects Log Page:May Support 00:32:19.645 NVMe-MI Commands & Effects Log Page: May Support 00:32:19.645 Data Area 4 for Telemetry Log: Not Supported 00:32:19.645 Error Log Page Entries Supported: 128 00:32:19.645 Keep Alive: Supported 00:32:19.645 Keep Alive Granularity: 1000 ms 00:32:19.645 00:32:19.645 NVM Command Set Attributes 00:32:19.645 ========================== 00:32:19.645 Submission Queue Entry Size 00:32:19.645 Max: 64 00:32:19.645 Min: 64 00:32:19.645 Completion Queue Entry Size 00:32:19.645 Max: 16 00:32:19.645 Min: 16 00:32:19.645 Number of Namespaces: 1024 00:32:19.645 Compare Command: Not Supported 00:32:19.645 Write Uncorrectable Command: Not Supported 00:32:19.645 Dataset Management Command: Supported 00:32:19.645 Write Zeroes Command: Supported 00:32:19.645 Set Features Save Field: Not Supported 00:32:19.645 Reservations: Not Supported 00:32:19.645 Timestamp: Not Supported 00:32:19.645 Copy: Not Supported 00:32:19.645 Volatile Write Cache: Present 00:32:19.645 Atomic Write Unit (Normal): 1 00:32:19.645 Atomic Write Unit (PFail): 1 00:32:19.645 Atomic Compare & Write Unit: 1 00:32:19.645 Fused Compare & Write: Not Supported 00:32:19.645 Scatter-Gather List 00:32:19.645 SGL Command Set: Supported 00:32:19.645 SGL Keyed: Not Supported 00:32:19.645 SGL Bit Bucket Descriptor: Not Supported 00:32:19.645 SGL Metadata Pointer: Not Supported 00:32:19.645 Oversized SGL: Not Supported 00:32:19.645 SGL Metadata Address: Not Supported 00:32:19.645 SGL Offset: Supported 00:32:19.645 Transport SGL Data Block: Not Supported 00:32:19.645 Replay Protected Memory Block: Not Supported 00:32:19.645 00:32:19.645 Firmware Slot Information 00:32:19.645 ========================= 00:32:19.645 Active slot: 0 00:32:19.645 00:32:19.645 Asymmetric Namespace Access 00:32:19.645 =========================== 00:32:19.645 Change Count : 0 00:32:19.645 Number of ANA Group Descriptors : 1 00:32:19.645 ANA Group Descriptor : 0 00:32:19.645 ANA Group ID : 1 00:32:19.645 Number of NSID Values : 1 00:32:19.645 Change Count : 0 00:32:19.645 ANA State : 1 00:32:19.645 Namespace Identifier : 1 00:32:19.645 00:32:19.645 Commands Supported and Effects 00:32:19.645 ============================== 00:32:19.645 Admin Commands 00:32:19.645 -------------- 00:32:19.645 Get Log Page (02h): Supported 00:32:19.645 Identify (06h): Supported 00:32:19.645 Abort (08h): Supported 00:32:19.645 Set Features (09h): Supported 00:32:19.645 Get Features (0Ah): Supported 00:32:19.645 Asynchronous Event Request (0Ch): Supported 00:32:19.645 Keep Alive (18h): Supported 00:32:19.645 I/O Commands 00:32:19.645 ------------ 00:32:19.645 Flush (00h): Supported 00:32:19.645 Write (01h): Supported LBA-Change 00:32:19.645 Read (02h): Supported 00:32:19.645 Write Zeroes (08h): Supported LBA-Change 00:32:19.645 Dataset Management (09h): Supported 00:32:19.645 00:32:19.645 Error Log 00:32:19.645 ========= 
00:32:19.645 Entry: 0 00:32:19.645 Error Count: 0x3 00:32:19.645 Submission Queue Id: 0x0 00:32:19.645 Command Id: 0x5 00:32:19.645 Phase Bit: 0 00:32:19.645 Status Code: 0x2 00:32:19.645 Status Code Type: 0x0 00:32:19.645 Do Not Retry: 1 00:32:19.645 Error Location: 0x28 00:32:19.645 LBA: 0x0 00:32:19.645 Namespace: 0x0 00:32:19.645 Vendor Log Page: 0x0 00:32:19.645 ----------- 00:32:19.645 Entry: 1 00:32:19.645 Error Count: 0x2 00:32:19.645 Submission Queue Id: 0x0 00:32:19.645 Command Id: 0x5 00:32:19.645 Phase Bit: 0 00:32:19.645 Status Code: 0x2 00:32:19.645 Status Code Type: 0x0 00:32:19.645 Do Not Retry: 1 00:32:19.645 Error Location: 0x28 00:32:19.645 LBA: 0x0 00:32:19.645 Namespace: 0x0 00:32:19.645 Vendor Log Page: 0x0 00:32:19.645 ----------- 00:32:19.645 Entry: 2 00:32:19.645 Error Count: 0x1 00:32:19.645 Submission Queue Id: 0x0 00:32:19.645 Command Id: 0x4 00:32:19.645 Phase Bit: 0 00:32:19.645 Status Code: 0x2 00:32:19.645 Status Code Type: 0x0 00:32:19.645 Do Not Retry: 1 00:32:19.645 Error Location: 0x28 00:32:19.645 LBA: 0x0 00:32:19.645 Namespace: 0x0 00:32:19.645 Vendor Log Page: 0x0 00:32:19.645 00:32:19.645 Number of Queues 00:32:19.645 ================ 00:32:19.645 Number of I/O Submission Queues: 128 00:32:19.645 Number of I/O Completion Queues: 128 00:32:19.645 00:32:19.645 ZNS Specific Controller Data 00:32:19.645 ============================ 00:32:19.645 Zone Append Size Limit: 0 00:32:19.645 00:32:19.645 00:32:19.645 Active Namespaces 00:32:19.645 ================= 00:32:19.645 get_feature(0x05) failed 00:32:19.645 Namespace ID:1 00:32:19.645 Command Set Identifier: NVM (00h) 00:32:19.645 Deallocate: Supported 00:32:19.645 Deallocated/Unwritten Error: Not Supported 00:32:19.645 Deallocated Read Value: Unknown 00:32:19.645 Deallocate in Write Zeroes: Not Supported 00:32:19.645 Deallocated Guard Field: 0xFFFF 00:32:19.645 Flush: Supported 00:32:19.645 Reservation: Not Supported 00:32:19.645 Namespace Sharing Capabilities: Multiple Controllers 00:32:19.645 Size (in LBAs): 1953525168 (931GiB) 00:32:19.645 Capacity (in LBAs): 1953525168 (931GiB) 00:32:19.645 Utilization (in LBAs): 1953525168 (931GiB) 00:32:19.645 UUID: 70a2c3f9-3568-4986-9675-c29dc245cf94 00:32:19.645 Thin Provisioning: Not Supported 00:32:19.645 Per-NS Atomic Units: Yes 00:32:19.645 Atomic Boundary Size (Normal): 0 00:32:19.645 Atomic Boundary Size (PFail): 0 00:32:19.645 Atomic Boundary Offset: 0 00:32:19.645 NGUID/EUI64 Never Reused: No 00:32:19.645 ANA group ID: 1 00:32:19.645 Namespace Write Protected: No 00:32:19.645 Number of LBA Formats: 1 00:32:19.645 Current LBA Format: LBA Format #00 00:32:19.645 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:19.645 00:32:19.645 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:19.645 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:19.645 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:19.645 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:19.645 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:19.645 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:19.645 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:19.645 rmmod nvme_tcp 00:32:19.645 rmmod nvme_fabrics 00:32:19.646 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:19.646 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:19.646 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:19.646 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:19.646 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:19.646 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:19.646 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:19.646 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:19.646 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:19.646 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.646 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:19.646 05:46:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.174 05:46:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:22.174 05:46:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:22.175 05:46:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:22.175 05:46:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:22.175 05:46:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:22.175 05:46:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:22.175 05:46:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:22.175 05:46:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:22.175 05:46:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:22.175 05:46:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:22.175 05:46:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:23.111 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:23.111 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:23.111 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:23.111 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:23.111 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:23.111 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:23.111 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:23.111 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:23.111 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:23.111 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:23.111 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:23.111 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:23.111 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:23.111 0000:80:04.2 (8086 0e22): ioatdma 
-> vfio-pci 00:32:23.111 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:23.111 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:24.048 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:24.048 00:32:24.048 real 0m9.179s 00:32:24.048 user 0m1.954s 00:32:24.048 sys 0m3.251s 00:32:24.048 05:46:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:24.048 05:46:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:24.048 ************************************ 00:32:24.048 END TEST nvmf_identify_kernel_target 00:32:24.048 ************************************ 00:32:24.048 05:46:31 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:24.048 05:46:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:24.048 05:46:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:24.048 05:46:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.048 ************************************ 00:32:24.048 START TEST nvmf_auth_host 00:32:24.048 ************************************ 00:32:24.048 05:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:24.048 * Looking for test storage... 00:32:24.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:24.048 05:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:24.048 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:24.048 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:24.048 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:24.048 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:24.048 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:24.048 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:24.048 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:24.048 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:24.048 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:24.048 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:24.048 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:24.048 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:24.306 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:24.306 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:24.306 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:24.306 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:24.306 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:24.306 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
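Before the nvmf_auth_host test above gets going, the identify_kernel_target run finished by calling clean_kernel_target, which unwinds the configfs state that configure_kernel_target created, in reverse order: disable the namespace, unlink the subsystem from the port, then remove namespace, port and subsystem before unloading the modules. Spelled out with the redirection target that xtrace does not show (the enable path is assumed from the standard nvmet configfs layout, not visible in the log):

    # Tear down the kernel nvmet target created for the test. (sketch)
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet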
00:32:24.306 05:46:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:24.306 05:46:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:24.306 05:46:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:24.306 05:46:31 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:24.307 05:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@298 -- # local -ga mlx 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:26.210 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:26.210 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 
]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:26.210 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.210 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:26.210 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:26.211 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:26.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:26.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:32:26.468 00:32:26.468 --- 10.0.0.2 ping statistics --- 00:32:26.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.468 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:26.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:26.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:32:26.468 00:32:26.468 --- 10.0.0.1 ping statistics --- 00:32:26.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.468 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3373690 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # 
waitforlisten 3373690 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3373690 ']' 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:26.468 05:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c9ada5ae08c6df4bf58e703d3d6da912 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.uU6 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c9ada5ae08c6df4bf58e703d3d6da912 0 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c9ada5ae08c6df4bf58e703d3d6da912 0 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c9ada5ae08c6df4bf58e703d3d6da912 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:26.726 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.uU6 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.uU6 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.uU6 
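The gen_dhchap_key calls in this stretch each draw random hex text from /dev/urandom and wrap it in the DHHC-1 secret representation used for NVMe DH-HMAC-CHAP: DHHC-1:<id>:<base64 of the secret plus its CRC-32>:, where the two-digit id follows the digests map above (00 null, 01 sha256, 02 sha384, 03 sha512) and the hex string itself is the secret. A minimal stand-alone sketch, assuming the CRC-32 of the secret is appended little-endian before base64 encoding and using the gzip trailer purely as a convenient CRC-32 source (the function name is illustrative, not the test's own helper):

gen_dhchap_key_sketch() {
    local hmac_id=$1    # 0=null, 1=sha256, 2=sha384, 3=sha512
    local len=$2        # secret length in characters: 32, 48 or 64
    local secret
    # the hex text itself is the secret, hence len/2 random bytes
    secret=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    printf 'DHHC-1:%02d:%s:\n' "$hmac_id" \
        "$( { printf %s "$secret"
              printf %s "$secret" | gzip -c | tail -c 8 | head -c 4; } | base64 -w0 )"
}
gen_dhchap_key_sketch 1 32    # e.g. a 32-character secret keyed for hmac(sha256)

The resulting strings are what later appear verbatim as --dhchap-key material and in the kernel nvmet configfs writes.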
00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e9cebc89cc725ee2c0c56e336a5d230b79c25f640d9dd6534af9774cbcfa277b 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6Dj 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e9cebc89cc725ee2c0c56e336a5d230b79c25f640d9dd6534af9774cbcfa277b 3 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e9cebc89cc725ee2c0c56e336a5d230b79c25f640d9dd6534af9774cbcfa277b 3 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e9cebc89cc725ee2c0c56e336a5d230b79c25f640d9dd6534af9774cbcfa277b 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6Dj 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6Dj 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.6Dj 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ada219e50e92f6a49924881f48dfe594ad1f50d8af379b73 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.lTO 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ada219e50e92f6a49924881f48dfe594ad1f50d8af379b73 0 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ada219e50e92f6a49924881f48dfe594ad1f50d8af379b73 0 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix 
key digest 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ada219e50e92f6a49924881f48dfe594ad1f50d8af379b73 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.lTO 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.lTO 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.lTO 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3f3dffbb54f2b0e719065db89d545b0cc74875b3930f671b 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.dFY 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3f3dffbb54f2b0e719065db89d545b0cc74875b3930f671b 2 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3f3dffbb54f2b0e719065db89d545b0cc74875b3930f671b 2 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3f3dffbb54f2b0e719065db89d545b0cc74875b3930f671b 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:26.984 05:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.dFY 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.dFY 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.dFY 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=e53a4e503ec1b9c2bf9e0085d14ede77 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rBq 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e53a4e503ec1b9c2bf9e0085d14ede77 1 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e53a4e503ec1b9c2bf9e0085d14ede77 1 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e53a4e503ec1b9c2bf9e0085d14ede77 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:26.984 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rBq 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rBq 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.rBq 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=336685fe963eb85c749aa02690040b98 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.O0L 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 336685fe963eb85c749aa02690040b98 1 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 336685fe963eb85c749aa02690040b98 1 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=336685fe963eb85c749aa02690040b98 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.O0L 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.O0L 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.O0L 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:27.243 05:46:34 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7b9949e6c98c75afbac9fdbe9fb3a25eb415cd50602c8c68 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4r4 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7b9949e6c98c75afbac9fdbe9fb3a25eb415cd50602c8c68 2 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7b9949e6c98c75afbac9fdbe9fb3a25eb415cd50602c8c68 2 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7b9949e6c98c75afbac9fdbe9fb3a25eb415cd50602c8c68 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4r4 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4r4 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.4r4 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ff5de724468db43f4bdacceb39a3706e 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DiV 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ff5de724468db43f4bdacceb39a3706e 0 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ff5de724468db43f4bdacceb39a3706e 0 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ff5de724468db43f4bdacceb39a3706e 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:27.243 05:46:34 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DiV 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DiV 00:32:27.243 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.DiV 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0e9b6d201adb45a9dffc864909ddd3f4dcb4030ad59cf19a5a652c495d521aed 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Fkw 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0e9b6d201adb45a9dffc864909ddd3f4dcb4030ad59cf19a5a652c495d521aed 3 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0e9b6d201adb45a9dffc864909ddd3f4dcb4030ad59cf19a5a652c495d521aed 3 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0e9b6d201adb45a9dffc864909ddd3f4dcb4030ad59cf19a5a652c495d521aed 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Fkw 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Fkw 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Fkw 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3373690 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3373690 ']' 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
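At this point five secrets keys[0..4] and four companion secrets ckeys[0..3] exist; each keys[i] is later passed as --dhchap-key (host authentication) and ckeys[i] as --dhchap-ctrlr-key (bidirectional authentication), while ckeys[4] is deliberately left empty so keyid 4 also exercises the unidirectional case via the later ${ckeys[keyid]:+...} expansion. The application under test was started inside the target namespace with nvme_auth tracing enabled; a stand-alone equivalent of that launch, with an illustrative wait loop in place of the test's own waitforlisten, would be:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# wait for the default RPC socket; the real waitforlisten polls the RPC server itself
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done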
00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:27.244 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uU6 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.6Dj ]] 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6Dj 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.lTO 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.dFY ]] 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dFY 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.rBq 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.502 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.O0L ]] 00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.O0L 00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
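The keyring_file_add_key loop running here (it continues below through key4) registers each generated secret file with the running application under the names key0..key4 and ckey0..ckey3; the later attach calls refer to the secrets only by these keyring names. Issued by hand against the same target it would amount to the calls below (the rpc.py path and explicit -s socket flag are assumptions; rpc_cmd in the trace wraps the equivalent invocation):

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC keyring_file_add_key key1  /tmp/spdk.key-null.lTO      # host secret for keyid 1
$RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dFY    # controller secret for keyid 1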
00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.4r4 00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.DiV ]] 00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.DiV 00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Fkw 00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.503 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
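The nvmet_auth_init call underway here builds the counterpart target in the kernel entirely through nvmet configfs and then restricts it to the test host NQN. Condensed, the sequence that follows amounts to the sketch below; the echoed values are taken from the trace, but xtrace does not show redirection targets, so the attribute names used here are the standard nvmet configfs ones and the exact value-to-attribute mapping is an assumption:

sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet
mkdir "$sub" "$sub/namespaces/1" "$port"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"
# host/auth.sh then pins access to the single test host NQN
mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 0 > "$sub/attr_allow_any_host"
ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 "$sub/allowed_hosts/"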
00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:27.760 05:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:28.695 Waiting for block devices as requested 00:32:28.695 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:28.695 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:28.952 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:28.952 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:29.210 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:29.210 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:29.210 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:29.210 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:29.469 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:29.469 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:29.469 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:29.469 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:29.726 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:29.726 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:29.726 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:29.726 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:29.984 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:30.270 No valid GPT data, bailing 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:30.270 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:30.533 00:32:30.533 Discovery Log Number of Records 2, Generation counter 2 00:32:30.533 =====Discovery Log Entry 0====== 00:32:30.533 trtype: tcp 00:32:30.533 adrfam: ipv4 00:32:30.533 subtype: current discovery subsystem 00:32:30.533 treq: not specified, sq flow control disable supported 00:32:30.533 portid: 1 00:32:30.533 trsvcid: 4420 00:32:30.533 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:30.533 traddr: 10.0.0.1 00:32:30.533 eflags: none 00:32:30.533 sectype: none 00:32:30.533 =====Discovery Log Entry 1====== 00:32:30.533 trtype: tcp 00:32:30.533 adrfam: ipv4 00:32:30.533 subtype: nvme subsystem 00:32:30.533 treq: not specified, sq flow control disable supported 00:32:30.533 portid: 1 00:32:30.533 trsvcid: 4420 00:32:30.533 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:30.533 traddr: 10.0.0.1 00:32:30.533 eflags: none 00:32:30.533 sectype: none 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 
]] 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:30.533 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.534 nvme0n1 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.534 
05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: ]] 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.534 
05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.534 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.794 nvme0n1 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:30.794 05:46:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: ]] 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.794 05:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.053 nvme0n1 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
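Each connect_authenticate pass pairs a configfs write of the expected secrets on the kernel target's host entry (the echo 'hmac(shaX)' / echo ffdheY / echo DHHC-1:... lines) with the host-side RPCs seen here: restrict the initiator to one digest and DH group, attach with the named keyring secrets, confirm the controller came up, and detach. The host half, issued directly for the keyid-1 iteration just above (RPC names and flags are from the trace; the rpc.py path and -s flag are assumptions):

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
     -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
     --dhchap-key key1 --dhchap-ctrlr-key ckey1
$RPC bdev_nvme_get_controllers | jq -r '.[].name'   # "nvme0" on successful authentication
$RPC bdev_nvme_detach_controller nvme0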
00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: ]] 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.053 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.313 nvme0n1 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: ]] 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:31.313 05:46:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.313 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.572 nvme0n1 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.572 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.832 nvme0n1 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: ]] 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.832 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.091 nvme0n1 00:32:32.091 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.091 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.091 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.091 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.091 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.091 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.091 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.091 05:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.091 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.091 05:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: ]] 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.091 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.350 nvme0n1 00:32:32.350 
05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: ]] 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.350 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.608 nvme0n1 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
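
connect_authenticate itself is visible in the surrounding lines (host/auth.sh@55-65): it restricts the host to the digest and DH group under test via bdev_nvme_set_options, resolves the initiator address, attaches a controller with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), checks that nvme0 shows up, and detaches it again. A minimal sketch under those assumptions, with the xtrace toggling and error plumbing omitted:

  # Sketch of the connect_authenticate flow traced above; rpc_cmd and
  # get_main_ns_ip come from the SPDK test scripts, NVMF_INITIATOR_IP from the
  # test environment (10.0.0.1 in this run).
  connect_authenticate_sketch() {
      local digest=$1 dhgroup=$2 keyid=$3
      local ckey=()
      [[ -n ${ckeys[keyid]} ]] && ckey=(--dhchap-ctrlr-key "ckey${keyid}")

      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"

      # the new controller must be reported as nvme0 before it is torn down again
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }
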
00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: ]] 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.608 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.866 nvme0n1 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.866 
05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.866 05:46:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.866 05:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.125 nvme0n1 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: ]] 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:33.125 05:46:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.125 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.384 nvme0n1 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: ]] 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.384 05:46:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.384 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.641 nvme0n1 00:32:33.641 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.641 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.641 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.641 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.641 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: ]] 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.899 05:46:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.899 05:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.157 nvme0n1 00:32:34.157 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.157 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.157 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.157 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.157 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.157 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.157 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.157 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.157 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.157 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.157 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
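
The get_main_ns_ip calls interleaved through the trace (nvmf/common.sh@741-755) show how the attach address is chosen: an associative array maps each transport to the name of the environment variable holding its address, the entry for the transport in use is looked up, and the resolved value is echoed (NVMF_INITIATOR_IP, i.e. 10.0.0.1, for tcp). A simplified sketch; the indirect expansion step is implied rather than printed by xtrace, and any network-namespace handling in the real helper is left out:

  # Sketch of the address-selection logic from nvmf/common.sh@741-755.
  # TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_INITIATOR_IP come from the
  # test environment; in this run TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1.
  get_main_ns_ip_sketch() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      ip=${!ip}          # variable name -> address (implied between @748 and @750)
      [[ -z $ip ]] && return 1
      echo "$ip"
  }
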
00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: ]] 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.158 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.416 nvme0n1 00:32:34.416 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.416 05:46:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.416 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.416 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.416 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.416 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.416 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.416 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.416 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.416 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.416 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.416 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.416 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:34.416 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.416 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.416 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:34.416 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.417 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.981 nvme0n1 00:32:34.981 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:34.982 05:46:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: ]] 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.982 05:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.549 nvme0n1 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.549 
05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.549 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: ]] 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.550 05:46:42 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.550 05:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.116 nvme0n1 00:32:36.116 05:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.116 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.116 05:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.116 05:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.116 05:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.116 05:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: ]] 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.116 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.117 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.117 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.117 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:36.117 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.117 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.680 nvme0n1 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.680 
05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: ]] 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.680 05:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.243 nvme0n1 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:37.243 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.244 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.807 nvme0n1 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: ]] 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.807 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.808 05:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.808 05:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:37.808 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.808 05:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.738 nvme0n1 00:32:38.738 05:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.738 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.738 05:46:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.738 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.738 05:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.738 05:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.738 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.738 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.738 05:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.738 05:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: ]] 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.995 05:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.927 nvme0n1 00:32:39.927 05:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.927 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.927 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.927 05:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.927 05:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.927 05:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.927 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.927 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.927 05:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.927 05:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.927 05:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: ]] 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.928 05:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.863 nvme0n1 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.863 
05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: ]] 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
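After each attach, the trace repeats the same verification and teardown (host/auth.sh lines @64-@65): list the bdev_nvme controllers over RPC, check that the single controller is named nvme0 (the nvme0n1 lines in the log are its namespace showing up), then detach so the next digest/dhgroup/key combination starts from a clean state. Roughly, with the intermediate variable added here only for readability:

  # Post-attach check repeated after every authenticated connect (condensed from trace @64-@65)
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')  # expect exactly one controller
  [[ $name == "nvme0" ]]                                        # DH-HMAC-CHAP handshake succeeded
  rpc_cmd bdev_nvme_detach_controller nvme0                     # tear down before the next iteration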
00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.863 05:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.798 nvme0n1 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:41.798 
05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.798 05:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.732 nvme0n1 00:32:42.732 05:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.732 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.732 05:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.732 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.732 05:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: ]] 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.990 05:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.991 05:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.991 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.991 05:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.991 05:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.991 05:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.991 05:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.991 05:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.991 05:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.991 05:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.991 05:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.991 05:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.991 05:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.991 05:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:42.991 05:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.991 05:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.991 nvme0n1 00:32:42.991 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.991 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.991 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.991 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.991 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.991 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: ]] 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:43.249 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
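The entries above trace one pass of the connect_authenticate helper: the target-side DHHC-1 secret is installed for the current digest/dhgroup/keyid, the initiator is restricted to that same digest and DH group, and a controller is attached with the matching DH-HMAC-CHAP key. A minimal sketch of that initiator-side sequence, assuming rpc_cmd is the test harness wrapper around scripts/rpc.py and that key0/ckey0 were registered earlier under those names (values here taken from the log, not to be reused):

  # constrain the initiator to the digest/dhgroup combination under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # attach to the target, authenticating with key0 and, bidirectionally, ckey0
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0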
00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.250 nvme0n1 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: ]] 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.250 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.508 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.508 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.508 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.508 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.509 nvme0n1 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: ]] 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.509 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.767 nvme0n1 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:43.767 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.768 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.070 nvme0n1 00:32:44.070 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.070 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.070 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.070 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.070 05:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.070 05:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: ]] 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
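The ckey=(...) assignment that recurs in these entries relies on bash's ${var:+word} expansion, so --dhchap-ctrlr-key is passed only when a controller key exists for that keyid (keyid 4 has an empty ckey, so that case authenticates in one direction only). A small standalone illustration of the idiom, with hypothetical values:

  ckeys=( "ckey-for-0" "" )                  # index 1 has no controller key
  for keyid in 0 1; do
      extra=( ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} )
      echo "keyid=$keyid -> ${extra[@]:-<no ctrlr key argument>}"
  done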
00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.070 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.343 nvme0n1 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: ]] 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
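Between iterations the test confirms that exactly the expected controller came up and then tears it down before moving to the next digest/dhgroup/keyid combination; the stray "nvme0n1" tokens appear to be the bdev name reported by the attach call once the authenticated connection is established. A sketch of that check-and-cleanup step, again assuming rpc_cmd wraps scripts/rpc.py:

  # the attached controller should be the one requested above
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == "nvme0" ]]

  # drop the connection so the next combination starts from a clean state
  rpc_cmd bdev_nvme_detach_controller nvme0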
00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.343 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.601 nvme0n1 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: ]] 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.601 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.859 nvme0n1 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: ]] 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.859 05:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.117 nvme0n1 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.117 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.375 nvme0n1 00:32:45.375 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.375 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.375 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.375 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.375 05:46:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.375 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.375 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: ]] 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.376 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.635 nvme0n1 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: ]] 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:45.635 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.636 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.636 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.636 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.636 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.636 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.636 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.636 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.636 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.636 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.636 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.636 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.636 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.636 05:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.636 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:45.636 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.636 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.894 nvme0n1 00:32:45.894 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.894 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.894 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.894 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.894 05:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.894 05:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.152 05:46:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: ]] 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.152 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.411 nvme0n1 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: ]] 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:46.411 05:46:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.411 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.671 nvme0n1 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:46.671 05:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.928 nvme0n1 00:32:46.928 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.928 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.928 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.928 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.928 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: ]] 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.186 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.751 nvme0n1 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: ]] 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.751 05:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.317 nvme0n1 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.317 05:46:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: ]] 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:48.317 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.318 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.884 nvme0n1 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: ]] 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.884 05:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.449 nvme0n1 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
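
For orientation, the trace repeats one pattern for every digest / DH-group / key-id combination: the host/auth.sh@100-@104 markers show the script iterating over the configured digests, dhgroups and key ids, programming the key on the kernel target (nvmet_auth_set_key), then running connect_authenticate. A minimal sketch of that outer loop, reconstructed only from the @100/@101/@102/@103/@104 markers visible in this trace (the array names digests/dhgroups/keys appear in the trace; the exact loop bodies in auth.sh may differ):

    for digest in "${digests[@]}"; do           # e.g. sha384, sha512 in this part of the run
      for dhgroup in "${dhgroups[@]}"; do       # ffdhe2048 .. ffdhe8192
        for keyid in "${!keys[@]}"; do          # key ids 0..4
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # push key/ckey into the kernel nvmet host entry
          connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach via SPDK RPC (see below)
        done
      done
    done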
00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.449 05:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.015 nvme0n1 00:32:50.015 05:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.015 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.015 05:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.015 05:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.015 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.015 05:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: ]] 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
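
Each connect_authenticate step in turn reduces to a handful of SPDK RPC calls, all of which appear verbatim in the trace: restrict the allowed digest and DH group, attach the controller with the host key (and, when present, the controller key for bidirectional DH-HMAC-CHAP), confirm the controller came up, and detach it before the next combination. A condensed sketch using only the calls shown in this log (rpc_cmd is the autotest wrapper around SPDK's rpc.py; key0/ckey0 are names of keys presumably registered earlier in the run, not the raw DHHC-1 secrets):

    # allow only the digest/dhgroup under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    # connect with in-band DH-HMAC-CHAP authentication
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # verify the controller exists, then detach before the next combination
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0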
00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.273 05:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.206 nvme0n1 00:32:51.206 05:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.206 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.206 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.206 05:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.206 05:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.206 05:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.206 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.206 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.206 05:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.206 05:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.206 05:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: ]] 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.207 05:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.140 nvme0n1 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: ]] 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.140 05:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.514 nvme0n1 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: ]] 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.514 05:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.449 nvme0n1 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.449 05:47:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.449 05:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.450 05:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.450 05:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.450 05:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.450 05:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:54.450 05:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.450 05:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.383 nvme0n1 00:32:55.383 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.383 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.383 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.383 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.383 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.383 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.383 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.383 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.383 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.383 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.383 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.383 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:55.383 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:55.383 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.383 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: ]] 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.384 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.642 nvme0n1 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.642 05:47:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: ]] 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.642 nvme0n1 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.642 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.900 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.900 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.900 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.900 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.900 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.900 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.900 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:55.900 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.900 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.900 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.900 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:55.900 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:55.900 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: ]] 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.901 nvme0n1 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.901 05:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.901 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.901 05:47:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.901 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:55.901 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.901 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.901 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.901 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:55.901 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:55.901 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:55.901 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.901 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.901 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: ]] 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.159 05:47:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.159 nvme0n1 00:32:56.159 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.160 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.418 nvme0n1 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: ]] 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.418 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.676 nvme0n1 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.676 
05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: ]] 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.676 05:47:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.676 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.934 nvme0n1 00:32:56.934 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.934 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.934 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.934 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.934 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.934 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.934 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.934 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.934 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.934 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: ]] 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.935 05:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.193 nvme0n1 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.193 05:47:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: ]] 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.193 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.453 nvme0n1 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.453 
05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.453 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.741 nvme0n1 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: ]] 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.741 05:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.997 nvme0n1 00:32:57.997 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.997 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.997 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.997 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.997 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: ]] 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.254 05:47:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.254 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.511 nvme0n1 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
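The round that finishes above is the pattern this sha512 pass repeats for every key slot and dhgroup: narrow the host to one digest/dhgroup pair, attach a controller that must authenticate with the matching key, confirm the controller exists, then detach before the next slot. A condensed sketch of that host-side sequence, assuming SPDK's scripts/rpc.py is invoked directly (the suite goes through its rpc_cmd wrapper) and that the key names key2/ckey2 were loaded into the keyring earlier in the run:

    # Host-side check for one digest/dhgroup/key combination (sketch only).
    rpc=scripts/rpc.py                          # assumed path to SPDK's RPC client
    hostnqn=nqn.2024-02.io.spdk:host0
    subnqn=nqn.2024-02.io.spdk:cnode0

    # Restrict the host to the digest/dhgroup pair under test.
    "$rpc" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Attach, authenticating with key2; ckey2 asks the controller to authenticate back.
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # The attach only succeeds if authentication did; verify, then clean up.
    [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    "$rpc" bdev_nvme_detach_controller nvme0

The bare "nvme0n1" lines that follow each successful attach in the trace are the attach RPC printing the bdev it created for the subsystem's namespace.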
00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: ]] 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.511 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.768 nvme0n1 00:32:58.768 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.768 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:32:58.768 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.768 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.768 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.768 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.768 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.768 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.768 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.768 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: ]] 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.026 05:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.283 nvme0n1 00:32:59.283 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.283 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.283 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.283 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.283 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.283 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.283 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.283 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.283 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.283 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.283 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.283 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.283 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.284 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.540 nvme0n1 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: ]] 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.540 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.541 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.541 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.541 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
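Interleaved with the host-side RPCs, each nvmet_auth_set_key call is the target half of the round: the echoed 'hmac(sha512)', the dhgroup name, the DHHC-1 key and, when one exists, the controller key are written into the kernel nvmet target's per-host authentication settings. The xtrace does not show where the echo output is redirected, so the configfs paths below are an assumption based on the usual nvmet layout, not something visible in the log:

    # Target-side counterpart of nvmet_auth_set_key (sketch; paths are assumed,
    # the trace only shows the values being echoed).
    host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"       # digest under test
    echo ffdhe6144      > "$host_cfg/dhchap_dhgroup"    # dhgroup under test
    echo "$key"         > "$host_cfg/dhchap_key"        # host's DHHC-1:... secret
    [[ -n $ckey ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"   # bidirectional case

Key slot 4 has no controller key (its rounds show ckey= empty), which is why the [[ -z '' ]] branch skips the controller-key write there and the corresponding attach is issued with --dhchap-key key4 alone.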
00:32:59.541 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.541 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.541 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.541 05:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.541 05:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:59.541 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.541 05:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.104 nvme0n1 00:33:00.104 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.105 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.105 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.105 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.105 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.105 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.105 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.105 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.105 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.105 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: ]] 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
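The get_main_ns_ip block repeated before every attach is only resolving which address to dial: it maps the transport under test to the name of the environment variable holding the target address and prints that variable's value, 10.0.0.1 throughout this run. A condensed reading of what the trace shows, with the indirection spelled out; the TEST_TRANSPORT variable name is an assumption, since the xtrace only shows the already-expanded 'tcp':

    # Resolve the address to connect to for the transport under test.
    get_main_ns_ip() {
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        [[ -n ${TEST_TRANSPORT:-} ]] || return 1        # transport name, assumed variable
        local var=${ip_candidates[$TEST_TRANSPORT]:-}
        [[ -n $var ]] || return 1
        echo "${!var}"                                  # NVMF_INITIATOR_IP=10.0.0.1 here
    }

The result feeds the -a argument of bdev_nvme_attach_controller in every round, which is why the same [[ -z tcp ]] / [[ -z 10.0.0.1 ]] checks recur throughout the trace.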
00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.362 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.927 nvme0n1 00:33:00.927 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.927 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.927 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.927 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.927 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.927 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.927 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.927 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.927 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.927 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.927 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.927 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:00.927 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:00.927 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.927 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.927 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.927 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: ]] 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.928 05:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.493 nvme0n1 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: ]] 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.493 05:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.060 nvme0n1 00:33:02.060 05:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.060 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.060 05:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.060 05:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.060 05:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.060 05:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.060 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.627 nvme0n1 00:33:02.627 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.627 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.627 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.627 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.627 05:47:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.627 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.627 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.627 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.627 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.627 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.627 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.627 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:02.627 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.627 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:02.627 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.627 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.627 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:02.627 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzlhZGE1YWUwOGM2ZGY0YmY1OGU3MDNkM2Q2ZGE5MTL0fxlQ: 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: ]] 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljZWJjODljYzcyNWVlMmMwYzU2ZTMzNmE1ZDIzMGI3OWMyNWY2NDBkOWRkNjUzNGFmOTc3NGNiY2ZhMjc3Yi4NmVg=: 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.628 05:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.562 nvme0n1 00:33:03.562 05:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.562 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.562 05:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.562 05:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.562 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.562 05:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: ]] 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.821 05:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.752 nvme0n1 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.752 05:47:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUzYTRlNTAzZWMxYjljMmJmOWUwMDg1ZDE0ZWRlNzdac6UY: 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: ]] 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzM2Njg1ZmU5NjNlYjg1Yzc0OWFhMDI2OTAwNDBiOTiimVRV: 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.752 05:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.685 nvme0n1 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I5OTQ5ZTZjOThjNzVhZmJhYzlmZGJlOWZiM2EyNWViNDE1Y2Q1MDYwMmM4YzY4l4iYsQ==: 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: ]] 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY1ZGU3MjQ0NjhkYjQzZjRiZGFjY2ViMzlhMzcwNmXu+zH6: 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:05.685 05:47:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.685 05:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.943 05:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.943 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.943 05:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.943 05:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.943 05:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.943 05:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.943 05:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.943 05:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.943 05:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.943 05:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.943 05:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.943 05:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.943 05:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:05.943 05:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.943 05:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.885 nvme0n1 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGU5YjZkMjAxYWRiNDVhOWRmZmM4NjQ5MDlkZGQzZjRkY2I0MDMwYWQ1OWNmMTlhNWE2NTJjNDk1ZDUyMWFlZCKZ7r8=: 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:06.885 05:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.825 nvme0n1 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRhMjE5ZTUwZTkyZjZhNDk5MjQ4ODFmNDhkZmU1OTRhZDFmNTBkOGFmMzc5Yjcz+NuzFg==: 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: ]] 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzZGZmYmI1NGYyYjBlNzE5MDY1ZGI4OWQ1NDViMGNjNzQ4NzViMzkzMGY2NzFiDXOVbQ==: 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.825 
05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.825 request: 00:33:07.825 { 00:33:07.825 "name": "nvme0", 00:33:07.825 "trtype": "tcp", 00:33:07.825 "traddr": "10.0.0.1", 00:33:07.825 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:07.825 "adrfam": "ipv4", 00:33:07.825 "trsvcid": "4420", 00:33:07.825 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:07.825 "method": "bdev_nvme_attach_controller", 00:33:07.825 "req_id": 1 00:33:07.825 } 00:33:07.825 Got JSON-RPC error response 00:33:07.825 response: 00:33:07.825 { 00:33:07.825 "code": -5, 00:33:07.825 "message": "Input/output error" 00:33:07.825 } 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.825 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:07.826 
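The rejected attach above, and the key-mismatch attempts that follow, all exercise the same negative path: the kernel target still requires DHCHAP authentication for nqn.2024-02.io.spdk:host0, so an attach without the right key is expected to come back as JSON-RPC error -5 and to leave no controller behind. A minimal by-hand reproduction is sketched below; the NQNs, address, port, and rpc.py flags are copied from this log, while the use of the default RPC socket and the standalone rpc.py call (instead of the harness's rpc_cmd wrapper) are assumptions.

    # Sketch only: replay the expected-failure attach outside the test harness.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if ! "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "attach rejected as expected: target enforces DHCHAP and no key was supplied"
    fi
    # After the rejection the controller list should still be empty
    # (the trace above asserts this with (( 0 == 0 )) on the jq length).
    "$RPC" bdev_nvme_get_controllers | jq length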
05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.826 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.085 request: 00:33:08.085 { 00:33:08.085 "name": "nvme0", 00:33:08.085 "trtype": "tcp", 00:33:08.085 "traddr": "10.0.0.1", 00:33:08.085 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:08.085 "adrfam": "ipv4", 00:33:08.085 "trsvcid": "4420", 00:33:08.085 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:08.085 "dhchap_key": "key2", 00:33:08.085 "method": "bdev_nvme_attach_controller", 00:33:08.085 "req_id": 1 00:33:08.085 } 00:33:08.085 Got JSON-RPC error response 00:33:08.085 response: 00:33:08.085 { 00:33:08.085 "code": -5, 00:33:08.085 "message": "Input/output error" 00:33:08.085 } 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:08.085 
05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.085 05:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.085 request: 00:33:08.085 { 00:33:08.085 "name": "nvme0", 00:33:08.085 "trtype": "tcp", 00:33:08.085 "traddr": "10.0.0.1", 00:33:08.085 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:08.085 "adrfam": "ipv4", 00:33:08.085 "trsvcid": "4420", 00:33:08.085 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:08.085 "dhchap_key": "key1", 00:33:08.085 "dhchap_ctrlr_key": "ckey2", 00:33:08.085 "method": "bdev_nvme_attach_controller", 00:33:08.085 "req_id": 1 
00:33:08.085 } 00:33:08.085 Got JSON-RPC error response 00:33:08.085 response: 00:33:08.085 { 00:33:08.085 "code": -5, 00:33:08.085 "message": "Input/output error" 00:33:08.085 } 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:08.085 rmmod nvme_tcp 00:33:08.085 rmmod nvme_fabrics 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3373690 ']' 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3373690 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 3373690 ']' 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 3373690 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3373690 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3373690' 00:33:08.085 killing process with pid 3373690 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 3373690 00:33:08.085 05:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 3373690 00:33:08.345 05:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:08.345 05:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:08.345 05:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:08.345 05:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:08.345 05:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:08.345 05:47:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.345 05:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:08.345 05:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.880 05:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:10.880 05:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:10.880 05:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:10.880 05:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:10.880 05:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:10.880 05:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:10.880 05:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:10.880 05:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:10.880 05:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:10.880 05:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:10.880 05:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:10.880 05:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:10.880 05:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:11.445 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:11.445 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:11.445 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:11.445 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:11.445 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:11.445 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:11.725 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:11.725 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:11.725 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:11.725 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:11.726 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:11.726 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:11.726 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:11.726 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:11.726 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:11.726 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:12.671 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:12.671 05:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.uU6 /tmp/spdk.key-null.lTO /tmp/spdk.key-sha256.rBq /tmp/spdk.key-sha384.4r4 /tmp/spdk.key-sha512.Fkw /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:12.671 05:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:14.047 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:14.047 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:14.047 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:33:14.047 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:14.047 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:14.047 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:14.047 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:14.047 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:14.047 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:14.047 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:14.047 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:14.047 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:14.047 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:14.047 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:14.047 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:14.047 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:14.047 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:14.047 00:33:14.047 real 0m49.958s 00:33:14.047 user 0m47.527s 00:33:14.047 sys 0m5.811s 00:33:14.047 05:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:14.047 05:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.047 ************************************ 00:33:14.047 END TEST nvmf_auth_host 00:33:14.047 ************************************ 00:33:14.047 05:47:21 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:14.047 05:47:21 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:14.047 05:47:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:14.047 05:47:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:14.047 05:47:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:14.047 ************************************ 00:33:14.047 START TEST nvmf_digest 00:33:14.047 ************************************ 00:33:14.047 05:47:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:14.306 * Looking for test storage... 
00:33:14.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:14.306 05:47:21 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:14.306 05:47:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:16.207 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:16.207 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:16.207 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:16.207 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:16.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:16.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:33:16.207 00:33:16.207 --- 10.0.0.2 ping statistics --- 00:33:16.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.207 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:16.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:16.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:33:16.207 00:33:16.207 --- 10.0.0.1 ping statistics --- 00:33:16.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.207 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:16.207 05:47:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:16.465 ************************************ 00:33:16.465 START TEST nvmf_digest_clean 00:33:16.465 ************************************ 00:33:16.465 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:33:16.465 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:16.465 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:16.465 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:16.465 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:16.465 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:16.465 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:16.465 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:16.465 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:16.465 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3383134 00:33:16.466 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:16.466 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3383134 00:33:16.466 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3383134 ']' 00:33:16.466 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.466 
05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:16.466 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.466 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:16.466 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:16.466 [2024-07-14 05:47:23.372333] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:16.466 [2024-07-14 05:47:23.372430] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.466 EAL: No free 2048 kB hugepages reported on node 1 00:33:16.466 [2024-07-14 05:47:23.443842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.466 [2024-07-14 05:47:23.532965] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.466 [2024-07-14 05:47:23.533031] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.466 [2024-07-14 05:47:23.533058] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.466 [2024-07-14 05:47:23.533072] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.466 [2024-07-14 05:47:23.533084] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
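For the digest suite the target is brought up the way nvmfappstart shows here: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, so nothing starts serving until the test has configured it over RPC. The sketch below shows that launch-and-wait pattern with the flags taken from this log; the spdk_get_version liveness probe is an assumption standing in for the harness's waitforlisten helper.

    # Sketch only: start the target in the test netns and block until its RPC socket answers.
    NS=cvl_0_0_ns_spdk
    TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    ip netns exec "$NS" "$TGT" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Poor man's waitforlisten: poll until the default RPC socket responds.
    until "$RPC" spdk_get_version >/dev/null 2>&1; do sleep 0.2; done
    echo "nvmf_tgt (pid $nvmfpid) is up and waiting for configuration RPCs"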
00:33:16.466 [2024-07-14 05:47:23.533113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:16.723 null0 00:33:16.723 [2024-07-14 05:47:23.719157] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:16.723 [2024-07-14 05:47:23.743383] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3383161 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3383161 /var/tmp/bperf.sock 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3383161 ']' 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:16.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:16.723 05:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:16.723 [2024-07-14 05:47:23.788953] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:16.723 [2024-07-14 05:47:23.789019] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3383161 ] 00:33:16.723 EAL: No free 2048 kB hugepages reported on node 1 00:33:16.982 [2024-07-14 05:47:23.850469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.982 [2024-07-14 05:47:23.942382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.982 05:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:16.982 05:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:16.982 05:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:16.982 05:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:16.982 05:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:17.548 05:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:17.548 05:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:17.806 nvme0n1 00:33:17.806 05:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:17.806 05:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:18.064 Running I/O for 2 seconds... 
00:33:19.963 00:33:19.963 Latency(us) 00:33:19.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.963 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:19.963 nvme0n1 : 2.00 18982.88 74.15 0.00 0.00 6733.69 3131.16 15631.55 00:33:19.963 =================================================================================================================== 00:33:19.963 Total : 18982.88 74.15 0.00 0.00 6733.69 3131.16 15631.55 00:33:19.963 0 00:33:19.963 05:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:19.963 05:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:19.963 05:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:19.963 05:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:19.963 05:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:19.963 | select(.opcode=="crc32c") 00:33:19.963 | "\(.module_name) \(.executed)"' 00:33:20.221 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:20.221 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:20.221 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:20.221 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:20.221 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3383161 00:33:20.221 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3383161 ']' 00:33:20.221 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3383161 00:33:20.221 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:20.221 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:20.221 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3383161 00:33:20.221 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:20.221 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:20.221 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3383161' 00:33:20.221 killing process with pid 3383161 00:33:20.221 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3383161 00:33:20.221 Received shutdown signal, test time was about 2.000000 seconds 00:33:20.221 00:33:20.221 Latency(us) 00:33:20.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.221 =================================================================================================================== 00:33:20.221 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:20.221 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3383161 00:33:20.479 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:20.479 05:47:27 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:20.479 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:20.479 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:20.479 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:20.479 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:20.479 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:20.479 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3383571 00:33:20.479 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3383571 /var/tmp/bperf.sock 00:33:20.480 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:20.480 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3383571 ']' 00:33:20.480 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:20.480 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:20.480 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:20.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:20.480 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:20.480 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:20.480 [2024-07-14 05:47:27.482934] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:20.480 [2024-07-14 05:47:27.483030] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3383571 ] 00:33:20.480 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:20.480 Zero copy mechanism will not be used. 
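Each run_bperf pass in this section drives the same RPC sequence against the bdevperf instance that owns /var/tmp/bperf.sock; only the -w/-o/-q arguments change between passes. A minimal stand-alone sketch of that sequence, using only paths and arguments already visible in the trace (backgrounding bdevperf by hand here is a simplification of what the harness does with waitforlisten):

# start bdevperf paused (--wait-for-rpc) with its own RPC socket
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
  -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &

# finish framework init, then attach the NVMe/TCP controller with data digest enabled (--ddgst)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
  bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# run the timed I/O pass against the attached nvme0n1 bdev
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
  -s /var/tmp/bperf.sock perform_tests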
00:33:20.480 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.480 [2024-07-14 05:47:27.549046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.738 [2024-07-14 05:47:27.640975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:20.738 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:20.738 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:20.738 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:20.738 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:20.738 05:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:20.996 05:47:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:20.996 05:47:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:21.562 nvme0n1 00:33:21.562 05:47:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:21.562 05:47:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:21.562 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:21.562 Zero copy mechanism will not be used. 00:33:21.562 Running I/O for 2 seconds... 
00:33:23.462 00:33:23.462 Latency(us) 00:33:23.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.462 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:23.462 nvme0n1 : 2.01 2409.54 301.19 0.00 0.00 6635.78 6092.42 8689.59 00:33:23.462 =================================================================================================================== 00:33:23.462 Total : 2409.54 301.19 0.00 0.00 6635.78 6092.42 8689.59 00:33:23.462 0 00:33:23.462 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:23.462 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:23.462 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:23.462 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:23.462 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:23.462 | select(.opcode=="crc32c") 00:33:23.462 | "\(.module_name) \(.executed)"' 00:33:23.721 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:23.721 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:23.721 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:23.721 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:23.721 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3383571 00:33:23.721 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3383571 ']' 00:33:23.721 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3383571 00:33:23.721 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:23.721 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:23.721 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3383571 00:33:23.721 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:23.721 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:23.721 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3383571' 00:33:23.721 killing process with pid 3383571 00:33:23.721 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3383571 00:33:23.721 Received shutdown signal, test time was about 2.000000 seconds 00:33:23.721 00:33:23.721 Latency(us) 00:33:23.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.721 =================================================================================================================== 00:33:23.721 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:23.721 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3383571 00:33:23.979 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:23.979 05:47:30 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:23.979 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:23.979 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:23.979 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:23.979 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:23.979 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:23.979 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3384091 00:33:23.979 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:23.979 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3384091 /var/tmp/bperf.sock 00:33:23.979 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3384091 ']' 00:33:23.979 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:23.979 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:23.979 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:23.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:23.979 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:23.979 05:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:23.979 [2024-07-14 05:47:31.031753] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:33:23.979 [2024-07-14 05:47:31.031841] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3384091 ] 00:33:23.979 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.237 [2024-07-14 05:47:31.091285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.237 [2024-07-14 05:47:31.177110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.237 05:47:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:24.237 05:47:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:24.237 05:47:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:24.237 05:47:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:24.237 05:47:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:24.495 05:47:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:24.495 05:47:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:25.061 nvme0n1 00:33:25.061 05:47:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:25.061 05:47:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:25.319 Running I/O for 2 seconds... 
00:33:27.245 00:33:27.245 Latency(us) 00:33:27.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.245 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:27.245 nvme0n1 : 2.01 21236.77 82.96 0.00 0.00 6017.60 3325.35 17767.54 00:33:27.245 =================================================================================================================== 00:33:27.245 Total : 21236.77 82.96 0.00 0.00 6017.60 3325.35 17767.54 00:33:27.245 0 00:33:27.245 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:27.245 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:27.245 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:27.245 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:27.245 | select(.opcode=="crc32c") 00:33:27.245 | "\(.module_name) \(.executed)"' 00:33:27.245 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:27.536 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:27.536 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:27.536 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:27.536 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:27.536 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3384091 00:33:27.536 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3384091 ']' 00:33:27.536 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3384091 00:33:27.536 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:27.536 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:27.536 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3384091 00:33:27.536 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:27.536 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:27.536 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3384091' 00:33:27.536 killing process with pid 3384091 00:33:27.536 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3384091 00:33:27.536 Received shutdown signal, test time was about 2.000000 seconds 00:33:27.536 00:33:27.536 Latency(us) 00:33:27.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.536 =================================================================================================================== 00:33:27.536 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:27.536 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3384091 00:33:27.797 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:27.797 05:47:34 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:27.797 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:27.797 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:27.797 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:27.797 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:27.797 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:27.797 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3384496 00:33:27.797 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3384496 /var/tmp/bperf.sock 00:33:27.797 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:27.797 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3384496 ']' 00:33:27.797 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:27.797 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:27.797 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:27.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:27.797 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:27.797 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:27.797 [2024-07-14 05:47:34.775242] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:27.797 [2024-07-14 05:47:34.775335] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3384496 ] 00:33:27.798 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:27.798 Zero copy mechanism will not be used. 
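After every timed pass the harness confirms that CRC-32C digests were actually computed and reports which accel module did the work. The check, as traced in this section (socket path and jq filter copied verbatim from the trace; with DSA scanning disabled the expected module name is "software" and the executed count must be non-zero):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
  | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# expected output shape: software <non-zero count>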
00:33:27.798 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.798 [2024-07-14 05:47:34.836984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.055 [2024-07-14 05:47:34.923116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.055 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:28.055 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:28.055 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:28.055 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:28.055 05:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:28.313 05:47:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:28.313 05:47:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:28.879 nvme0n1 00:33:28.879 05:47:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:28.879 05:47:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:28.879 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:28.879 Zero copy mechanism will not be used. 00:33:28.879 Running I/O for 2 seconds... 
00:33:30.778 00:33:30.778 Latency(us) 00:33:30.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.778 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:30.778 nvme0n1 : 2.01 1529.06 191.13 0.00 0.00 10432.25 8155.59 19612.25 00:33:30.778 =================================================================================================================== 00:33:30.778 Total : 1529.06 191.13 0.00 0.00 10432.25 8155.59 19612.25 00:33:30.778 0 00:33:30.778 05:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:30.778 05:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:30.778 05:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:30.778 05:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:30.778 05:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:30.778 | select(.opcode=="crc32c") 00:33:30.778 | "\(.module_name) \(.executed)"' 00:33:31.036 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:31.036 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:31.036 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:31.036 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:31.036 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3384496 00:33:31.036 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3384496 ']' 00:33:31.036 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3384496 00:33:31.036 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:31.036 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:31.036 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3384496 00:33:31.036 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:31.036 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:31.036 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3384496' 00:33:31.036 killing process with pid 3384496 00:33:31.036 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3384496 00:33:31.036 Received shutdown signal, test time was about 2.000000 seconds 00:33:31.036 00:33:31.036 Latency(us) 00:33:31.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.036 =================================================================================================================== 00:33:31.036 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:31.036 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3384496 00:33:31.294 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3383134 00:33:31.294 05:47:38 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3383134 ']' 00:33:31.294 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3383134 00:33:31.294 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:31.294 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:31.294 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3383134 00:33:31.294 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:31.294 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:31.294 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3383134' 00:33:31.294 killing process with pid 3383134 00:33:31.294 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3383134 00:33:31.294 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3383134 00:33:31.552 00:33:31.552 real 0m15.306s 00:33:31.552 user 0m30.918s 00:33:31.552 sys 0m3.744s 00:33:31.552 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:31.552 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:31.552 ************************************ 00:33:31.552 END TEST nvmf_digest_clean 00:33:31.552 ************************************ 00:33:31.552 05:47:38 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:31.552 05:47:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:31.552 05:47:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:31.552 05:47:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:31.810 ************************************ 00:33:31.810 START TEST nvmf_digest_error 00:33:31.810 ************************************ 00:33:31.810 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:31.810 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:31.810 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:31.810 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:31.810 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:31.810 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3384938 00:33:31.810 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:31.810 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3384938 00:33:31.810 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3384938 ']' 00:33:31.810 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.810 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:33:31.810 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.810 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:31.810 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:31.810 [2024-07-14 05:47:38.729213] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:31.810 [2024-07-14 05:47:38.729294] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.810 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.810 [2024-07-14 05:47:38.799954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.810 [2024-07-14 05:47:38.891312] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:31.810 [2024-07-14 05:47:38.891364] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:31.810 [2024-07-14 05:47:38.891391] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:31.810 [2024-07-14 05:47:38.891405] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:31.810 [2024-07-14 05:47:38.891417] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
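The nvmf_digest_error variant starts its own target inside the cvl_0_0_ns_spdk namespace with every tracepoint group enabled (-e 0xFFFF) and paused until RPC init (--wait-for-rpc), so the digest failures it provokes can be traced after the fact. A sketch of the two follow-up steps the application itself suggests above (the spdk_trace binary location is an assumption; it normally lives under build/bin in the same SPDK tree):

# snapshot the nvmf tracepoints of app instance 0 (matches the -i 0 the target was started with)
spdk_trace -s nvmf -i 0
# or keep the raw shared-memory trace file for offline analysis
cp /dev/shm/nvmf_trace.0 /tmp/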
00:33:31.810 [2024-07-14 05:47:38.891446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.068 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:32.068 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:32.068 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:32.068 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:32.068 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.069 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:32.069 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:32.069 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.069 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.069 [2024-07-14 05:47:38.972073] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:32.069 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.069 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:32.069 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:32.069 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.069 05:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.069 null0 00:33:32.069 [2024-07-14 05:47:39.092130] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.069 [2024-07-14 05:47:39.116370] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.069 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.069 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:32.069 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:32.069 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:32.069 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:32.069 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:32.069 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3385081 00:33:32.069 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3385081 /var/tmp/bperf.sock 00:33:32.069 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:32.069 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3385081 ']' 00:33:32.069 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:32.069 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:33:32.069 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:32.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:32.069 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:32.069 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.069 [2024-07-14 05:47:39.162271] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:32.069 [2024-07-14 05:47:39.162332] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385081 ] 00:33:32.327 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.327 [2024-07-14 05:47:39.221580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.327 [2024-07-14 05:47:39.306998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.327 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:32.327 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:32.327 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:32.327 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:32.904 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:32.905 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.905 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.905 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.905 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:32.905 05:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:32.905 nvme0n1 00:33:33.168 05:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:33.168 05:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.168 05:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.168 05:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.168 05:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:33.168 05:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:33.168 Running I/O for 2 seconds... 00:33:33.168 [2024-07-14 05:47:40.164711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.168 [2024-07-14 05:47:40.164771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.168 [2024-07-14 05:47:40.164790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.168 [2024-07-14 05:47:40.178510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.168 [2024-07-14 05:47:40.178544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.168 [2024-07-14 05:47:40.178562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.168 [2024-07-14 05:47:40.190594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.168 [2024-07-14 05:47:40.190624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.168 [2024-07-14 05:47:40.190640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.168 [2024-07-14 05:47:40.205010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.168 [2024-07-14 05:47:40.205043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.168 [2024-07-14 05:47:40.205060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.168 [2024-07-14 05:47:40.217913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.168 [2024-07-14 05:47:40.217945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.168 [2024-07-14 05:47:40.217963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.168 [2024-07-14 05:47:40.230968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.168 [2024-07-14 05:47:40.231000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.168 [2024-07-14 05:47:40.231018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.168 [2024-07-14 05:47:40.244559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.168 [2024-07-14 05:47:40.244590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10470 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:33.168 [2024-07-14 05:47:40.244608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.168 [2024-07-14 05:47:40.257183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.168 [2024-07-14 05:47:40.257212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.168 [2024-07-14 05:47:40.257228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.168 [2024-07-14 05:47:40.270509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.168 [2024-07-14 05:47:40.270555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.168 [2024-07-14 05:47:40.270572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.425 [2024-07-14 05:47:40.283698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.425 [2024-07-14 05:47:40.283727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.425 [2024-07-14 05:47:40.283742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.425 [2024-07-14 05:47:40.297291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.425 [2024-07-14 05:47:40.297321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.425 [2024-07-14 05:47:40.297338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.425 [2024-07-14 05:47:40.311617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.425 [2024-07-14 05:47:40.311652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.425 [2024-07-14 05:47:40.311671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.425 [2024-07-14 05:47:40.324701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.425 [2024-07-14 05:47:40.324736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.425 [2024-07-14 05:47:40.324755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.425 [2024-07-14 05:47:40.338437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.425 [2024-07-14 05:47:40.338468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:11322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.425 [2024-07-14 05:47:40.338484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.425 [2024-07-14 05:47:40.351817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.425 [2024-07-14 05:47:40.351848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.425 [2024-07-14 05:47:40.351879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.425 [2024-07-14 05:47:40.364272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.425 [2024-07-14 05:47:40.364317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.425 [2024-07-14 05:47:40.364334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.425 [2024-07-14 05:47:40.377737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.425 [2024-07-14 05:47:40.377768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.425 [2024-07-14 05:47:40.377785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.425 [2024-07-14 05:47:40.391047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.425 [2024-07-14 05:47:40.391092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.425 [2024-07-14 05:47:40.391108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.425 [2024-07-14 05:47:40.405131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.425 [2024-07-14 05:47:40.405176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.425 [2024-07-14 05:47:40.405193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.425 [2024-07-14 05:47:40.418075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.425 [2024-07-14 05:47:40.418120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.425 [2024-07-14 05:47:40.418137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.425 [2024-07-14 05:47:40.431024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.425 [2024-07-14 05:47:40.431054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.425 [2024-07-14 05:47:40.431071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.425 [2024-07-14 05:47:40.443952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.425 [2024-07-14 05:47:40.443981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.425 [2024-07-14 05:47:40.443998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.425 [2024-07-14 05:47:40.458826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.425 [2024-07-14 05:47:40.458860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.426 [2024-07-14 05:47:40.458890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.426 [2024-07-14 05:47:40.470215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.426 [2024-07-14 05:47:40.470249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.426 [2024-07-14 05:47:40.470267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.426 [2024-07-14 05:47:40.484385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.426 [2024-07-14 05:47:40.484419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.426 [2024-07-14 05:47:40.484438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.426 [2024-07-14 05:47:40.498453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.426 [2024-07-14 05:47:40.498483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.426 [2024-07-14 05:47:40.498499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.426 [2024-07-14 05:47:40.510585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.426 [2024-07-14 05:47:40.510614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.426 [2024-07-14 05:47:40.510630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.426 [2024-07-14 05:47:40.524395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.426 
[2024-07-14 05:47:40.524426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.426 [2024-07-14 05:47:40.524443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.683 [2024-07-14 05:47:40.536988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.537016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.537032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.551893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.551937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.551953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.565967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.565999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.566017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.579083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.579127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.579143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.592430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.592477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.592496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.605493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.605539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.605558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.619586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.619616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.619633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.631001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.631029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.631044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.647126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.647175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.647194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.658297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.658341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.658361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.672384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.672418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.672438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.686319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.686349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.686365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.699603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.699648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.699671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.713189] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.713220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.713236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.727749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.727784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.727802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.739928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.739964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.739997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.754804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.754839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.754858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.766479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.766514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.766533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.684 [2024-07-14 05:47:40.779349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.684 [2024-07-14 05:47:40.779384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.684 [2024-07-14 05:47:40.779404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.941 [2024-07-14 05:47:40.794044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.941 [2024-07-14 05:47:40.794074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.941 [2024-07-14 05:47:40.794104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:33.941 [2024-07-14 05:47:40.807423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.941 [2024-07-14 05:47:40.807453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.941 [2024-07-14 05:47:40.807486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.941 [2024-07-14 05:47:40.820088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.941 [2024-07-14 05:47:40.820123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.941 [2024-07-14 05:47:40.820139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.941 [2024-07-14 05:47:40.833649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.941 [2024-07-14 05:47:40.833684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.941 [2024-07-14 05:47:40.833703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.941 [2024-07-14 05:47:40.847205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.941 [2024-07-14 05:47:40.847236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.941 [2024-07-14 05:47:40.847253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.941 [2024-07-14 05:47:40.859384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.941 [2024-07-14 05:47:40.859415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.941 [2024-07-14 05:47:40.859432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.941 [2024-07-14 05:47:40.873961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.941 [2024-07-14 05:47:40.873991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.941 [2024-07-14 05:47:40.874008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.941 [2024-07-14 05:47:40.888379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.941 [2024-07-14 05:47:40.888410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.941 [2024-07-14 05:47:40.888427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.941 [2024-07-14 05:47:40.899695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.941 [2024-07-14 05:47:40.899729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.941 [2024-07-14 05:47:40.899749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.941 [2024-07-14 05:47:40.914578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.941 [2024-07-14 05:47:40.914609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.941 [2024-07-14 05:47:40.914625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.941 [2024-07-14 05:47:40.927687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.941 [2024-07-14 05:47:40.927718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.941 [2024-07-14 05:47:40.927735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.941 [2024-07-14 05:47:40.940316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.941 [2024-07-14 05:47:40.940345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.941 [2024-07-14 05:47:40.940361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.941 [2024-07-14 05:47:40.953845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.942 [2024-07-14 05:47:40.953881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.942 [2024-07-14 05:47:40.953899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.942 [2024-07-14 05:47:40.967733] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.942 [2024-07-14 05:47:40.967762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.942 [2024-07-14 05:47:40.967779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.942 [2024-07-14 05:47:40.979364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.942 [2024-07-14 05:47:40.979393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.942 [2024-07-14 05:47:40.979409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.942 [2024-07-14 05:47:40.993696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.942 [2024-07-14 05:47:40.993730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.942 [2024-07-14 05:47:40.993749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.942 [2024-07-14 05:47:41.009022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.942 [2024-07-14 05:47:41.009053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.942 [2024-07-14 05:47:41.009069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.942 [2024-07-14 05:47:41.021732] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.942 [2024-07-14 05:47:41.021762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.942 [2024-07-14 05:47:41.021779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.942 [2024-07-14 05:47:41.033198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:33.942 [2024-07-14 05:47:41.033242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.942 [2024-07-14 05:47:41.033261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.199 [2024-07-14 05:47:41.048315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.199 [2024-07-14 05:47:41.048346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.199 [2024-07-14 05:47:41.048367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.199 [2024-07-14 05:47:41.061423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.199 [2024-07-14 05:47:41.061452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.199 [2024-07-14 05:47:41.061468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.199 [2024-07-14 05:47:41.074776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.199 [2024-07-14 05:47:41.074810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.199 [2024-07-14 05:47:41.074828] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.199 [2024-07-14 05:47:41.089450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.199 [2024-07-14 05:47:41.089481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.199 [2024-07-14 05:47:41.089498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.199 [2024-07-14 05:47:41.101204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.199 [2024-07-14 05:47:41.101235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.199 [2024-07-14 05:47:41.101267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.199 [2024-07-14 05:47:41.115113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.199 [2024-07-14 05:47:41.115144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.199 [2024-07-14 05:47:41.115161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.199 [2024-07-14 05:47:41.128807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.199 [2024-07-14 05:47:41.128841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.199 [2024-07-14 05:47:41.128860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.199 [2024-07-14 05:47:41.143340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.199 [2024-07-14 05:47:41.143370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.199 [2024-07-14 05:47:41.143400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.199 [2024-07-14 05:47:41.155723] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.199 [2024-07-14 05:47:41.155758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.199 [2024-07-14 05:47:41.155776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.199 [2024-07-14 05:47:41.169221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.199 [2024-07-14 05:47:41.169251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:34.199 [2024-07-14 05:47:41.169266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.200 [2024-07-14 05:47:41.183313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.200 [2024-07-14 05:47:41.183342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.200 [2024-07-14 05:47:41.183374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.200 [2024-07-14 05:47:41.195174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.200 [2024-07-14 05:47:41.195219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.200 [2024-07-14 05:47:41.195235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.200 [2024-07-14 05:47:41.209775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.200 [2024-07-14 05:47:41.209809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.200 [2024-07-14 05:47:41.209828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.200 [2024-07-14 05:47:41.222843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.200 [2024-07-14 05:47:41.222885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.200 [2024-07-14 05:47:41.222920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.200 [2024-07-14 05:47:41.234908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.200 [2024-07-14 05:47:41.234946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.200 [2024-07-14 05:47:41.234962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.200 [2024-07-14 05:47:41.249691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.200 [2024-07-14 05:47:41.249726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.200 [2024-07-14 05:47:41.249745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.200 [2024-07-14 05:47:41.262671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.200 [2024-07-14 05:47:41.262702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:263 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.200 [2024-07-14 05:47:41.262718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.200 [2024-07-14 05:47:41.276631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.200 [2024-07-14 05:47:41.276674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.200 [2024-07-14 05:47:41.276696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.200 [2024-07-14 05:47:41.290308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.200 [2024-07-14 05:47:41.290354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.200 [2024-07-14 05:47:41.290370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.200 [2024-07-14 05:47:41.302805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.200 [2024-07-14 05:47:41.302836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.200 [2024-07-14 05:47:41.302853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.458 [2024-07-14 05:47:41.317284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.458 [2024-07-14 05:47:41.317316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.458 [2024-07-14 05:47:41.317333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.458 [2024-07-14 05:47:41.329055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.458 [2024-07-14 05:47:41.329084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.458 [2024-07-14 05:47:41.329100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.458 [2024-07-14 05:47:41.342953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.458 [2024-07-14 05:47:41.342982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.458 [2024-07-14 05:47:41.342998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.458 [2024-07-14 05:47:41.359754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.458 [2024-07-14 05:47:41.359786] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.459 [2024-07-14 05:47:41.359802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.459 [2024-07-14 05:47:41.372830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.459 [2024-07-14 05:47:41.372864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.459 [2024-07-14 05:47:41.372895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.459 [2024-07-14 05:47:41.385799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.459 [2024-07-14 05:47:41.385843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.459 [2024-07-14 05:47:41.385858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.459 [2024-07-14 05:47:41.398335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.459 [2024-07-14 05:47:41.398375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.459 [2024-07-14 05:47:41.398395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.459 [2024-07-14 05:47:41.413527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.459 [2024-07-14 05:47:41.413559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.459 [2024-07-14 05:47:41.413576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.459 [2024-07-14 05:47:41.425268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.459 [2024-07-14 05:47:41.425298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.459 [2024-07-14 05:47:41.425315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.459 [2024-07-14 05:47:41.439536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.459 [2024-07-14 05:47:41.439566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.459 [2024-07-14 05:47:41.439583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.459 [2024-07-14 05:47:41.452725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.459 [2024-07-14 05:47:41.452756] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.459 [2024-07-14 05:47:41.452772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.459 [2024-07-14 05:47:41.466151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.459 [2024-07-14 05:47:41.466182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.459 [2024-07-14 05:47:41.466198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.459 [2024-07-14 05:47:41.478840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.459 [2024-07-14 05:47:41.478879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.459 [2024-07-14 05:47:41.478898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.459 [2024-07-14 05:47:41.492053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.459 [2024-07-14 05:47:41.492097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.459 [2024-07-14 05:47:41.492113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.459 [2024-07-14 05:47:41.505530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.459 [2024-07-14 05:47:41.505565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.459 [2024-07-14 05:47:41.505584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.459 [2024-07-14 05:47:41.518577] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.459 [2024-07-14 05:47:41.518622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.459 [2024-07-14 05:47:41.518640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.459 [2024-07-14 05:47:41.531845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.459 [2024-07-14 05:47:41.531882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.459 [2024-07-14 05:47:41.531900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.459 [2024-07-14 05:47:41.545196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x98c360) 00:33:34.459 [2024-07-14 05:47:41.545226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.459 [2024-07-14 05:47:41.545243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.459 [2024-07-14 05:47:41.557746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.459 [2024-07-14 05:47:41.557780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.459 [2024-07-14 05:47:41.557799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.717 [2024-07-14 05:47:41.572980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.717 [2024-07-14 05:47:41.573010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.717 [2024-07-14 05:47:41.573027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.717 [2024-07-14 05:47:41.584431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.717 [2024-07-14 05:47:41.584460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.717 [2024-07-14 05:47:41.584476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.717 [2024-07-14 05:47:41.598576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.717 [2024-07-14 05:47:41.598606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.717 [2024-07-14 05:47:41.598621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.717 [2024-07-14 05:47:41.611435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.717 [2024-07-14 05:47:41.611464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.717 [2024-07-14 05:47:41.611479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.717 [2024-07-14 05:47:41.625381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.717 [2024-07-14 05:47:41.625411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.718 [2024-07-14 05:47:41.625433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.718 [2024-07-14 05:47:41.637666] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.718 [2024-07-14 05:47:41.637699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.718 [2024-07-14 05:47:41.637718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.718 [2024-07-14 05:47:41.651556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.718 [2024-07-14 05:47:41.651589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.718 [2024-07-14 05:47:41.651607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.718 [2024-07-14 05:47:41.665264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.718 [2024-07-14 05:47:41.665296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.718 [2024-07-14 05:47:41.665313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.718 [2024-07-14 05:47:41.676766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.718 [2024-07-14 05:47:41.676796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.718 [2024-07-14 05:47:41.676826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.718 [2024-07-14 05:47:41.690809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.718 [2024-07-14 05:47:41.690840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.718 [2024-07-14 05:47:41.690856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.718 [2024-07-14 05:47:41.705225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.718 [2024-07-14 05:47:41.705269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.718 [2024-07-14 05:47:41.705286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.718 [2024-07-14 05:47:41.718623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.718 [2024-07-14 05:47:41.718651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.718 [2024-07-14 05:47:41.718666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:34.718 [2024-07-14 05:47:41.731547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.718 [2024-07-14 05:47:41.731577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.718 [2024-07-14 05:47:41.731593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.718 [2024-07-14 05:47:41.744758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.718 [2024-07-14 05:47:41.744795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.718 [2024-07-14 05:47:41.744811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.718 [2024-07-14 05:47:41.757762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.718 [2024-07-14 05:47:41.757793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.718 [2024-07-14 05:47:41.757810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.718 [2024-07-14 05:47:41.771595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.718 [2024-07-14 05:47:41.771628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.718 [2024-07-14 05:47:41.771647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.718 [2024-07-14 05:47:41.784341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.718 [2024-07-14 05:47:41.784372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.718 [2024-07-14 05:47:41.784389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.718 [2024-07-14 05:47:41.799156] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.718 [2024-07-14 05:47:41.799199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.718 [2024-07-14 05:47:41.799215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.718 [2024-07-14 05:47:41.810830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.718 [2024-07-14 05:47:41.810864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.718 [2024-07-14 05:47:41.810892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.976 [2024-07-14 05:47:41.826068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.976 [2024-07-14 05:47:41.826098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.976 [2024-07-14 05:47:41.826115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.976 [2024-07-14 05:47:41.838116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.976 [2024-07-14 05:47:41.838146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.976 [2024-07-14 05:47:41.838163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.976 [2024-07-14 05:47:41.852314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.976 [2024-07-14 05:47:41.852359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.976 [2024-07-14 05:47:41.852376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.976 [2024-07-14 05:47:41.865199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.976 [2024-07-14 05:47:41.865233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.976 [2024-07-14 05:47:41.865251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.976 [2024-07-14 05:47:41.880545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.976 [2024-07-14 05:47:41.880575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.976 [2024-07-14 05:47:41.880592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.976 [2024-07-14 05:47:41.893016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.976 [2024-07-14 05:47:41.893046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.976 [2024-07-14 05:47:41.893062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.976 [2024-07-14 05:47:41.905239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.976 [2024-07-14 05:47:41.905269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.976 [2024-07-14 05:47:41.905286] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.976 [2024-07-14 05:47:41.919197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.976 [2024-07-14 05:47:41.919241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.976 [2024-07-14 05:47:41.919258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.976 [2024-07-14 05:47:41.932845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.976 [2024-07-14 05:47:41.932881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.976 [2024-07-14 05:47:41.932900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.976 [2024-07-14 05:47:41.947262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.977 [2024-07-14 05:47:41.947292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.977 [2024-07-14 05:47:41.947308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.977 [2024-07-14 05:47:41.958324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.977 [2024-07-14 05:47:41.958357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.977 [2024-07-14 05:47:41.958376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.977 [2024-07-14 05:47:41.972846] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.977 [2024-07-14 05:47:41.972889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.977 [2024-07-14 05:47:41.972929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.977 [2024-07-14 05:47:41.986295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.977 [2024-07-14 05:47:41.986328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.977 [2024-07-14 05:47:41.986347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.977 [2024-07-14 05:47:41.999215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.977 [2024-07-14 05:47:41.999244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.977 [2024-07-14 05:47:41.999275] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.977 [2024-07-14 05:47:42.012762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.977 [2024-07-14 05:47:42.012792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.977 [2024-07-14 05:47:42.012809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.977 [2024-07-14 05:47:42.027181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.977 [2024-07-14 05:47:42.027211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.977 [2024-07-14 05:47:42.027243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.977 [2024-07-14 05:47:42.039132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.977 [2024-07-14 05:47:42.039163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.977 [2024-07-14 05:47:42.039179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.977 [2024-07-14 05:47:42.053144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.977 [2024-07-14 05:47:42.053174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.977 [2024-07-14 05:47:42.053199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.977 [2024-07-14 05:47:42.065329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.977 [2024-07-14 05:47:42.065359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.977 [2024-07-14 05:47:42.065378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.977 [2024-07-14 05:47:42.079089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:34.977 [2024-07-14 05:47:42.079120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.977 [2024-07-14 05:47:42.079137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.235 [2024-07-14 05:47:42.094574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:35.235 [2024-07-14 05:47:42.094608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:35.235 [2024-07-14 05:47:42.094627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.235 [2024-07-14 05:47:42.106570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:35.235 [2024-07-14 05:47:42.106615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.235 [2024-07-14 05:47:42.106634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.235 [2024-07-14 05:47:42.121473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:35.235 [2024-07-14 05:47:42.121501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.235 [2024-07-14 05:47:42.121528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.235 [2024-07-14 05:47:42.134312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:35.235 [2024-07-14 05:47:42.134343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.235 [2024-07-14 05:47:42.134361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.235 [2024-07-14 05:47:42.146763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x98c360) 00:33:35.235 [2024-07-14 05:47:42.146797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.235 [2024-07-14 05:47:42.146815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.235 00:33:35.235 Latency(us) 00:33:35.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.235 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:35.235 nvme0n1 : 2.00 19034.63 74.35 0.00 0.00 6715.67 3082.62 18641.35 00:33:35.235 =================================================================================================================== 00:33:35.235 Total : 19034.63 74.35 0.00 0.00 6715.67 3082.62 18641.35 00:33:35.235 0 00:33:35.236 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:35.236 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:35.236 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:35.236 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:35.236 | .driver_specific 00:33:35.236 | .nvme_error 00:33:35.236 | .status_code 00:33:35.236 | .command_transient_transport_error' 00:33:35.493 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 149 > 0 )) 00:33:35.493 05:47:42 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3385081 00:33:35.493 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3385081 ']' 00:33:35.493 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3385081 00:33:35.493 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:35.493 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:35.493 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3385081 00:33:35.493 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:35.493 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:35.493 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3385081' 00:33:35.493 killing process with pid 3385081 00:33:35.493 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3385081 00:33:35.493 Received shutdown signal, test time was about 2.000000 seconds 00:33:35.493 00:33:35.493 Latency(us) 00:33:35.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.494 =================================================================================================================== 00:33:35.494 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:35.494 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3385081 00:33:35.752 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:35.752 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:35.752 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:35.752 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:35.752 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:35.752 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3385485 00:33:35.752 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:35.752 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3385485 /var/tmp/bperf.sock 00:33:35.752 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3385485 ']' 00:33:35.752 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:35.752 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:35.752 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:35.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
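The block above closes out the first error pass and prepares the next one: get_transient_errcount reads the NVMe error statistics back from bdevperf over /var/tmp/bperf.sock, the assertion (( 149 > 0 )) passes because 149 transient transport errors were recorded, the bperf process (pid 3385081) is killed, and run_bperf_err relaunches bdevperf in RPC-server mode (-z) for a randread run with 128 KiB I/Os at queue depth 16. A minimal sketch of the counter read-back, with the RPC call and jq filter copied from the trace (SPDK_DIR is only shorthand for the workspace path shown in the full command):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # pull the transient-transport-error count out of bdev_get_iostat's JSON
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    # the stage passes when this prints a non-zero value; this run reported 149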
00:33:35.752 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:35.752 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.752 [2024-07-14 05:47:42.703195] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:35.752 [2024-07-14 05:47:42.703295] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385485 ] 00:33:35.752 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:35.752 Zero copy mechanism will not be used. 00:33:35.752 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.752 [2024-07-14 05:47:42.765928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.752 [2024-07-14 05:47:42.853952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.010 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:36.010 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:36.010 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:36.010 05:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:36.268 05:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:36.268 05:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.268 05:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:36.268 05:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.268 05:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:36.268 05:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:36.525 nvme0n1 00:33:36.525 05:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:36.525 05:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.525 05:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:36.525 05:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.525 05:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:36.525 05:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:36.783 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:36.783 Zero copy mechanism will not be used. 
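With the new bdevperf instance listening on /var/tmp/bperf.sock, the harness configures it and re-arms the fault injection before starting I/O: NVMe error statistics are enabled at the bdev layer, any previous crc32c injection is cleared, the target is attached over TCP with data digest enabled (--ddgst), and the accel layer is then told to corrupt crc32c results, which is why every READ in the two-second run that follows completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). A condensed sketch of that RPC sequence, using only the calls visible in the trace (SPDK_DIR is again shorthand for the workspace path; the accel_error_inject_error calls are issued through rpc_cmd and therefore appear to use the default RPC socket rather than bperf.sock):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable      # clear any earlier injection
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0      # creates bdev nvme0n1 with data digest on
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests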
00:33:36.783 Running I/O for 2 seconds... 00:33:36.783 [2024-07-14 05:47:43.741740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:36.783 [2024-07-14 05:47:43.741794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.783 [2024-07-14 05:47:43.741819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.783 [2024-07-14 05:47:43.756002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:36.783 [2024-07-14 05:47:43.756034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.783 [2024-07-14 05:47:43.756052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.783 [2024-07-14 05:47:43.769635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:36.783 [2024-07-14 05:47:43.769672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.783 [2024-07-14 05:47:43.769696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.783 [2024-07-14 05:47:43.783216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:36.783 [2024-07-14 05:47:43.783265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.783 [2024-07-14 05:47:43.783295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.783 [2024-07-14 05:47:43.796769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:36.783 [2024-07-14 05:47:43.796804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.783 [2024-07-14 05:47:43.796839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.783 [2024-07-14 05:47:43.810440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:36.783 [2024-07-14 05:47:43.810475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.783 [2024-07-14 05:47:43.810495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.783 [2024-07-14 05:47:43.824151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:36.783 [2024-07-14 05:47:43.824202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.783 [2024-07-14 05:47:43.824226] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.783 [2024-07-14 05:47:43.837877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:36.783 [2024-07-14 05:47:43.837926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.783 [2024-07-14 05:47:43.837943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.783 [2024-07-14 05:47:43.851438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:36.783 [2024-07-14 05:47:43.851474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.783 [2024-07-14 05:47:43.851494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.783 [2024-07-14 05:47:43.865136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:36.783 [2024-07-14 05:47:43.865187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.783 [2024-07-14 05:47:43.865207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.783 [2024-07-14 05:47:43.878893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:36.783 [2024-07-14 05:47:43.878940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.783 [2024-07-14 05:47:43.878964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.041 [2024-07-14 05:47:43.892852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.041 [2024-07-14 05:47:43.892910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-14 05:47:43.892929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.041 [2024-07-14 05:47:43.906649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.041 [2024-07-14 05:47:43.906684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-14 05:47:43.906707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.041 [2024-07-14 05:47:43.920585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.041 [2024-07-14 05:47:43.920625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-14 
05:47:43.920646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.041 [2024-07-14 05:47:43.934651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.041 [2024-07-14 05:47:43.934686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-14 05:47:43.934706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.041 [2024-07-14 05:47:43.948461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.041 [2024-07-14 05:47:43.948496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-14 05:47:43.948516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.041 [2024-07-14 05:47:43.962180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.041 [2024-07-14 05:47:43.962229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-14 05:47:43.962248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.041 [2024-07-14 05:47:43.975988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.041 [2024-07-14 05:47:43.976019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-14 05:47:43.976039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.041 [2024-07-14 05:47:43.989725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.041 [2024-07-14 05:47:43.989761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-14 05:47:43.989780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.041 [2024-07-14 05:47:44.003471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.041 [2024-07-14 05:47:44.003507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-14 05:47:44.003527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.041 [2024-07-14 05:47:44.017184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.041 [2024-07-14 05:47:44.017230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-14 05:47:44.017250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.041 [2024-07-14 05:47:44.031286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.041 [2024-07-14 05:47:44.031322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-14 05:47:44.031341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.041 [2024-07-14 05:47:44.045070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.041 [2024-07-14 05:47:44.045103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-14 05:47:44.045121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.041 [2024-07-14 05:47:44.058701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.041 [2024-07-14 05:47:44.058735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-14 05:47:44.058755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.041 [2024-07-14 05:47:44.072542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.041 [2024-07-14 05:47:44.072576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-14 05:47:44.072595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.041 [2024-07-14 05:47:44.086263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.042 [2024-07-14 05:47:44.086299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.042 [2024-07-14 05:47:44.086318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.042 [2024-07-14 05:47:44.100032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.042 [2024-07-14 05:47:44.100064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.042 [2024-07-14 05:47:44.100081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.042 [2024-07-14 05:47:44.113815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.042 [2024-07-14 05:47:44.113849] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.042 [2024-07-14 05:47:44.113875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.042 [2024-07-14 05:47:44.127627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.042 [2024-07-14 05:47:44.127662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.042 [2024-07-14 05:47:44.127681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.042 [2024-07-14 05:47:44.141625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.042 [2024-07-14 05:47:44.141661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.042 [2024-07-14 05:47:44.141680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.299 [2024-07-14 05:47:44.155557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.299 [2024-07-14 05:47:44.155599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.299 [2024-07-14 05:47:44.155619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.299 [2024-07-14 05:47:44.169151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.299 [2024-07-14 05:47:44.169200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.299 [2024-07-14 05:47:44.169219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.299 [2024-07-14 05:47:44.183087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.299 [2024-07-14 05:47:44.183118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.299 [2024-07-14 05:47:44.183136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.299 [2024-07-14 05:47:44.196663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.299 [2024-07-14 05:47:44.196698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.299 [2024-07-14 05:47:44.196717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.299 [2024-07-14 05:47:44.210204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.299 [2024-07-14 
05:47:44.210254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.299 [2024-07-14 05:47:44.210273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.299 [2024-07-14 05:47:44.223744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.299 [2024-07-14 05:47:44.223778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.299 [2024-07-14 05:47:44.223796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.299 [2024-07-14 05:47:44.237357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.299 [2024-07-14 05:47:44.237391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.299 [2024-07-14 05:47:44.237410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.299 [2024-07-14 05:47:44.251209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.299 [2024-07-14 05:47:44.251239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.299 [2024-07-14 05:47:44.251273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.299 [2024-07-14 05:47:44.264797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.299 [2024-07-14 05:47:44.264843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.299 [2024-07-14 05:47:44.264863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.299 [2024-07-14 05:47:44.278216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.299 [2024-07-14 05:47:44.278264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.299 [2024-07-14 05:47:44.278283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.299 [2024-07-14 05:47:44.291776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.299 [2024-07-14 05:47:44.291811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.299 [2024-07-14 05:47:44.291831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.299 [2024-07-14 05:47:44.305645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1bb4d50) 00:33:37.300 [2024-07-14 05:47:44.305681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.300 [2024-07-14 05:47:44.305701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.300 [2024-07-14 05:47:44.319288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.300 [2024-07-14 05:47:44.319322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.300 [2024-07-14 05:47:44.319342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.300 [2024-07-14 05:47:44.332967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.300 [2024-07-14 05:47:44.332998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.300 [2024-07-14 05:47:44.333015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.300 [2024-07-14 05:47:44.346682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.300 [2024-07-14 05:47:44.346717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.300 [2024-07-14 05:47:44.346737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.300 [2024-07-14 05:47:44.360366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.300 [2024-07-14 05:47:44.360401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.300 [2024-07-14 05:47:44.360421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.300 [2024-07-14 05:47:44.373967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.300 [2024-07-14 05:47:44.373999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.300 [2024-07-14 05:47:44.374017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.300 [2024-07-14 05:47:44.387407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.300 [2024-07-14 05:47:44.387442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.300 [2024-07-14 05:47:44.387471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.300 [2024-07-14 05:47:44.401069] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.300 [2024-07-14 05:47:44.401101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.300 [2024-07-14 05:47:44.401119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.415027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.415059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.415077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.428718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.428753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.428772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.442265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.442300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.442319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.456020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.456050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.456066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.469857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.469913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.469931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.483446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.483481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.483502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.497190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.497240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.497264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.511044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.511099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.511118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.524790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.524827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.524845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.538312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.538354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.538373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.552135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.552192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.552224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.565774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.565810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.565837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.579278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.579313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.579332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.592991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.593023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.593040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.606603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.606637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.606657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.620234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.620284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.620303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.633836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.633879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.633914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.647460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.647495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.647514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.558 [2024-07-14 05:47:44.661120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.558 [2024-07-14 05:47:44.661150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.558 [2024-07-14 05:47:44.661175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.816 [2024-07-14 05:47:44.674904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.816 [2024-07-14 05:47:44.674942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.816 [2024-07-14 05:47:44.674959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.816 [2024-07-14 05:47:44.688640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.816 [2024-07-14 05:47:44.688675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.816 [2024-07-14 05:47:44.688695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.816 [2024-07-14 05:47:44.702119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.816 [2024-07-14 05:47:44.702151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.816 [2024-07-14 05:47:44.702184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.816 [2024-07-14 05:47:44.715625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.816 [2024-07-14 05:47:44.715659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.816 [2024-07-14 05:47:44.715678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.816 [2024-07-14 05:47:44.729407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.816 [2024-07-14 05:47:44.729441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.816 [2024-07-14 05:47:44.729460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.816 [2024-07-14 05:47:44.743027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.816 [2024-07-14 05:47:44.743062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.816 [2024-07-14 05:47:44.743080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.816 [2024-07-14 05:47:44.757080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.816 [2024-07-14 05:47:44.757112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.816 [2024-07-14 05:47:44.757129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.816 [2024-07-14 05:47:44.770577] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.816 [2024-07-14 05:47:44.770611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:37.816 [2024-07-14 05:47:44.770630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.816 [2024-07-14 05:47:44.784177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.817 [2024-07-14 05:47:44.784225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.817 [2024-07-14 05:47:44.784244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.817 [2024-07-14 05:47:44.797796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.817 [2024-07-14 05:47:44.797829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.817 [2024-07-14 05:47:44.797849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.817 [2024-07-14 05:47:44.811463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.817 [2024-07-14 05:47:44.811498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.817 [2024-07-14 05:47:44.811517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.817 [2024-07-14 05:47:44.825127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.817 [2024-07-14 05:47:44.825160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.817 [2024-07-14 05:47:44.825195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.817 [2024-07-14 05:47:44.838849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.817 [2024-07-14 05:47:44.838894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.817 [2024-07-14 05:47:44.838927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.817 [2024-07-14 05:47:44.852626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.817 [2024-07-14 05:47:44.852661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.817 [2024-07-14 05:47:44.852680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.817 [2024-07-14 05:47:44.866430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.817 [2024-07-14 05:47:44.866465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.817 [2024-07-14 05:47:44.866485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.817 [2024-07-14 05:47:44.880130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.817 [2024-07-14 05:47:44.880161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.817 [2024-07-14 05:47:44.880196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.817 [2024-07-14 05:47:44.893717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.817 [2024-07-14 05:47:44.893751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.817 [2024-07-14 05:47:44.893770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.817 [2024-07-14 05:47:44.907303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.817 [2024-07-14 05:47:44.907338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.817 [2024-07-14 05:47:44.907357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.817 [2024-07-14 05:47:44.920894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:37.817 [2024-07-14 05:47:44.920943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.817 [2024-07-14 05:47:44.920960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:44.934743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.075 [2024-07-14 05:47:44.934777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:44.934796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:44.948630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.075 [2024-07-14 05:47:44.948665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:44.948684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:44.962157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.075 [2024-07-14 05:47:44.962205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:44.962223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:44.975931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.075 [2024-07-14 05:47:44.975963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:44.975986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:44.989718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.075 [2024-07-14 05:47:44.989753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:44.989773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:45.003228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.075 [2024-07-14 05:47:45.003276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:45.003295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:45.016816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.075 [2024-07-14 05:47:45.016852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:45.016879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:45.030607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.075 [2024-07-14 05:47:45.030642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:45.030661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:45.044153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.075 [2024-07-14 05:47:45.044202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:45.044222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:45.057583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 
00:33:38.075 [2024-07-14 05:47:45.057617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:45.057637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:45.071739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.075 [2024-07-14 05:47:45.071774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:45.071792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:45.085311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.075 [2024-07-14 05:47:45.085346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:45.085366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:45.098914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.075 [2024-07-14 05:47:45.098950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:45.098968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:45.112478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.075 [2024-07-14 05:47:45.112514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:45.112533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:45.126257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.075 [2024-07-14 05:47:45.126293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:45.126312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:45.139841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.075 [2024-07-14 05:47:45.139887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:45.139923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:45.153581] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.075 [2024-07-14 05:47:45.153616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:45.153635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.075 [2024-07-14 05:47:45.167337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.075 [2024-07-14 05:47:45.167372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.075 [2024-07-14 05:47:45.167390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.333 [2024-07-14 05:47:45.181007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.333 [2024-07-14 05:47:45.181041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.333 [2024-07-14 05:47:45.181058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.333 [2024-07-14 05:47:45.194998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.333 [2024-07-14 05:47:45.195030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.333 [2024-07-14 05:47:45.195048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.333 [2024-07-14 05:47:45.208606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.333 [2024-07-14 05:47:45.208641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.333 [2024-07-14 05:47:45.208667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.333 [2024-07-14 05:47:45.222277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.333 [2024-07-14 05:47:45.222312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.333 [2024-07-14 05:47:45.222331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.333 [2024-07-14 05:47:45.235705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.333 [2024-07-14 05:47:45.235740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.334 [2024-07-14 05:47:45.235760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:33:38.334 [2024-07-14 05:47:45.249665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.334 [2024-07-14 05:47:45.249700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.334 [2024-07-14 05:47:45.249720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.334 [2024-07-14 05:47:45.263457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.334 [2024-07-14 05:47:45.263491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.334 [2024-07-14 05:47:45.263509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.334 [2024-07-14 05:47:45.277084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.334 [2024-07-14 05:47:45.277131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.334 [2024-07-14 05:47:45.277147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.334 [2024-07-14 05:47:45.290842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.334 [2024-07-14 05:47:45.290883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.334 [2024-07-14 05:47:45.290918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.334 [2024-07-14 05:47:45.304646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.334 [2024-07-14 05:47:45.304680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.334 [2024-07-14 05:47:45.304700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.334 [2024-07-14 05:47:45.318021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.334 [2024-07-14 05:47:45.318067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.334 [2024-07-14 05:47:45.318083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.334 [2024-07-14 05:47:45.331553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.334 [2024-07-14 05:47:45.331594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.334 [2024-07-14 05:47:45.331614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.334 [2024-07-14 05:47:45.345149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.334 [2024-07-14 05:47:45.345196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.334 [2024-07-14 05:47:45.345215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.334 [2024-07-14 05:47:45.358821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.334 [2024-07-14 05:47:45.358855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.334 [2024-07-14 05:47:45.358884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.334 [2024-07-14 05:47:45.372555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.334 [2024-07-14 05:47:45.372589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.334 [2024-07-14 05:47:45.372608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.334 [2024-07-14 05:47:45.386009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.334 [2024-07-14 05:47:45.386038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.334 [2024-07-14 05:47:45.386054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.334 [2024-07-14 05:47:45.399615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.334 [2024-07-14 05:47:45.399650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.334 [2024-07-14 05:47:45.399669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.334 [2024-07-14 05:47:45.413352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.334 [2024-07-14 05:47:45.413387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.334 [2024-07-14 05:47:45.413406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.334 [2024-07-14 05:47:45.427061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.334 [2024-07-14 05:47:45.427092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.334 [2024-07-14 05:47:45.427109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.592 [2024-07-14 05:47:45.440630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.592 [2024-07-14 05:47:45.440664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.592 [2024-07-14 05:47:45.440683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.592 [2024-07-14 05:47:45.454382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.592 [2024-07-14 05:47:45.454417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.592 [2024-07-14 05:47:45.454436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.592 [2024-07-14 05:47:45.467921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.592 [2024-07-14 05:47:45.467953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.592 [2024-07-14 05:47:45.467971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.592 [2024-07-14 05:47:45.481495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.592 [2024-07-14 05:47:45.481529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.592 [2024-07-14 05:47:45.481548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.592 [2024-07-14 05:47:45.495503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.592 [2024-07-14 05:47:45.495539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.592 [2024-07-14 05:47:45.495558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.592 [2024-07-14 05:47:45.509056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.592 [2024-07-14 05:47:45.509088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.592 [2024-07-14 05:47:45.509105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.592 [2024-07-14 05:47:45.523042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.592 [2024-07-14 05:47:45.523073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:38.592 [2024-07-14 05:47:45.523090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.592 [2024-07-14 05:47:45.536570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.592 [2024-07-14 05:47:45.536605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.592 [2024-07-14 05:47:45.536624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.592 [2024-07-14 05:47:45.550121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.592 [2024-07-14 05:47:45.550151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.592 [2024-07-14 05:47:45.550187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.592 [2024-07-14 05:47:45.563621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.592 [2024-07-14 05:47:45.563655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.592 [2024-07-14 05:47:45.563681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.592 [2024-07-14 05:47:45.576988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.592 [2024-07-14 05:47:45.577019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.592 [2024-07-14 05:47:45.577036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.592 [2024-07-14 05:47:45.590776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.592 [2024-07-14 05:47:45.590811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.592 [2024-07-14 05:47:45.590830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.592 [2024-07-14 05:47:45.604342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.592 [2024-07-14 05:47:45.604376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.593 [2024-07-14 05:47:45.604395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.593 [2024-07-14 05:47:45.617983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.593 [2024-07-14 05:47:45.618029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.593 [2024-07-14 05:47:45.618046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.593 [2024-07-14 05:47:45.631537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.593 [2024-07-14 05:47:45.631572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.593 [2024-07-14 05:47:45.631602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.593 [2024-07-14 05:47:45.645164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.593 [2024-07-14 05:47:45.645214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.593 [2024-07-14 05:47:45.645232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.593 [2024-07-14 05:47:45.658840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.593 [2024-07-14 05:47:45.658883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.593 [2024-07-14 05:47:45.658918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.593 [2024-07-14 05:47:45.672346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.593 [2024-07-14 05:47:45.672381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.593 [2024-07-14 05:47:45.672405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.593 [2024-07-14 05:47:45.686062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.593 [2024-07-14 05:47:45.686093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.593 [2024-07-14 05:47:45.686110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.850 [2024-07-14 05:47:45.699817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.850 [2024-07-14 05:47:45.699854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.850 [2024-07-14 05:47:45.699882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.850 [2024-07-14 05:47:45.713503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.850 [2024-07-14 05:47:45.713538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.850 [2024-07-14 05:47:45.713557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.850 [2024-07-14 05:47:45.727467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb4d50) 00:33:38.850 [2024-07-14 05:47:45.727500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.850 [2024-07-14 05:47:45.727519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.851 00:33:38.851 Latency(us) 00:33:38.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.851 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:38.851 nvme0n1 : 2.01 2263.29 282.91 0.00 0.00 7064.40 6456.51 14272.28 00:33:38.851 =================================================================================================================== 00:33:38.851 Total : 2263.29 282.91 0.00 0.00 7064.40 6456.51 14272.28 00:33:38.851 0 00:33:38.851 05:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:38.851 05:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:38.851 05:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:38.851 | .driver_specific 00:33:38.851 | .nvme_error 00:33:38.851 | .status_code 00:33:38.851 | .command_transient_transport_error' 00:33:38.851 05:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:39.109 05:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 )) 00:33:39.109 05:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3385485 00:33:39.109 05:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3385485 ']' 00:33:39.109 05:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3385485 00:33:39.109 05:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:39.109 05:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:39.109 05:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3385485 00:33:39.109 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:39.109 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:39.109 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3385485' 00:33:39.109 killing process with pid 3385485 00:33:39.109 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3385485 00:33:39.109 Received shutdown signal, test time was about 2.000000 seconds 00:33:39.109 00:33:39.109 Latency(us) 00:33:39.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:33:39.109 =================================================================================================================== 00:33:39.109 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:39.109 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3385485 00:33:39.367 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:39.367 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:39.367 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:39.367 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:39.367 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:39.367 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3385894 00:33:39.367 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:39.367 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3385894 /var/tmp/bperf.sock 00:33:39.367 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3385894 ']' 00:33:39.367 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:39.367 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:39.367 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:39.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:39.367 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:39.367 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:39.367 [2024-07-14 05:47:46.266298] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
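The check at host/digest.sh:71 above reads the transient-transport-error counter that bdevperf accumulated during the randread pass before the process is killed. A minimal stand-alone sketch of that query, reusing only the rpc.py path, socket, bdev name and jq filter that appear in the trace; the threshold check mirrors the (( 146 > 0 )) test logged above, and the snippet is an illustration of the query, not the test script itself:

  #!/usr/bin/env bash
  # Pull per-bdev NVMe error statistics from the running bdevperf instance over its RPC socket.
  # This relies on bdev_nvme_set_options --nvme-error-stat having been enabled for the pass,
  # as the trace shows for the following randwrite setup.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock
  BDEV=nvme0n1

  # bdev_get_iostat returns JSON with a .bdevs[] array; this is the filter host/digest.sh
  # uses to extract the COMMAND TRANSIENT TRANSPORT ERROR count for the bdev.
  errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b "$BDEV" |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  # The digest_error pass counts as successful when at least one such error was recorded.
  (( errcount > 0 )) && echo "observed $errcount transient transport errors on $BDEV"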
00:33:39.367 [2024-07-14 05:47:46.266391] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385894 ] 00:33:39.367 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.367 [2024-07-14 05:47:46.325806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.367 [2024-07-14 05:47:46.411561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.625 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:39.625 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:39.625 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:39.625 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:39.910 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:39.910 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.910 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:39.910 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.910 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:39.910 05:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:40.168 nvme0n1 00:33:40.168 05:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:40.168 05:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.168 05:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:40.168 05:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.168 05:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:40.168 05:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:40.168 Running I/O for 2 seconds... 
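The trace above sets up the randwrite error-injection pass: NVMe error statistics and unlimited bdev-layer retries are enabled, crc32c error injection is disabled while the controller attaches with data digest (--ddgst) over TCP, corruption is then injected for 256 crc32c operations, and perform_tests starts the queued workload. A condensed sketch of that sequence, using only the RPC calls and arguments visible in the trace; socket handling is an assumption: the bdevperf calls go to /var/tmp/bperf.sock as logged, while the accel injection is issued through rpc_cmd in the trace, taken here to mean the nvmf target's default RPC socket:

  #!/usr/bin/env bash
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  BPERF_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
  BPERF_SOCK=/var/tmp/bperf.sock

  # Count NVMe errors per status code and retry failed I/O indefinitely at the bdev layer.
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Keep crc32c error injection off while the controller attaches (rpc_cmd in the trace;
  # the target's default RPC socket is assumed here).
  "$RPC" accel_error_inject_error -o crc32c -t disable

  # Attach the TCP controller with data digest enabled so digest failures are detectable.
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt the next 256 crc32c operations, which produces the data digest errors and
  # COMMAND TRANSIENT TRANSPORT ERROR completions logged below, then run the workload.
  "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256
  "$BPERF_PY" -s "$BPERF_SOCK" perform_tests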
00:33:40.168 [2024-07-14 05:47:47.207343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.168 [2024-07-14 05:47:47.208700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.168 [2024-07-14 05:47:47.208744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.168 [2024-07-14 05:47:47.219747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.168 [2024-07-14 05:47:47.221111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.168 [2024-07-14 05:47:47.221158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.168 [2024-07-14 05:47:47.232424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.168 [2024-07-14 05:47:47.233767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.168 [2024-07-14 05:47:47.233801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.168 [2024-07-14 05:47:47.245168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.168 [2024-07-14 05:47:47.246470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.168 [2024-07-14 05:47:47.246503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.168 [2024-07-14 05:47:47.257781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.168 [2024-07-14 05:47:47.259139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.168 [2024-07-14 05:47:47.259169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.168 [2024-07-14 05:47:47.270522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.168 [2024-07-14 05:47:47.271997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.168 [2024-07-14 05:47:47.272028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.456 [2024-07-14 05:47:47.283447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.456 [2024-07-14 05:47:47.284806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.284839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 
sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.295935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.297229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.297258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.308433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.309785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.309818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.321035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.322335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.322368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.333548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.334897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.334930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.346145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.347462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.347494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.358654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.360023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.360052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.371156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.372469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.372502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.383579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.384925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.384959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.396080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.397413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.397446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.408497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.409831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.409863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.421064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.422350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.422384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.433441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.434789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.434821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.446005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.447323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.447356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.458419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.459750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.459784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.470676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.472035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.472079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.483242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.484606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.484640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.495687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.497050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.497079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.508244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.509559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.509592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.520728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.522081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.522110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.457 [2024-07-14 05:47:47.534258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.457 [2024-07-14 05:47:47.535705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.457 [2024-07-14 05:47:47.535738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.548188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.549498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.718 [2024-07-14 
05:47:47.549532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.560791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.562148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.718 [2024-07-14 05:47:47.562177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.575557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.576913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.718 [2024-07-14 05:47:47.576946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.588236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.589593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.718 [2024-07-14 05:47:47.589625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.600754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.602114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.718 [2024-07-14 05:47:47.602146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.613350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.614674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.718 [2024-07-14 05:47:47.614707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.625871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.627171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.718 [2024-07-14 05:47:47.627200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.638275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.639607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24626 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:40.718 [2024-07-14 05:47:47.639640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.650798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.652150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.718 [2024-07-14 05:47:47.652194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.663432] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.664772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.718 [2024-07-14 05:47:47.664805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.676011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.677332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.718 [2024-07-14 05:47:47.677365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.688424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.689771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.718 [2024-07-14 05:47:47.689804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.700881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.702233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.718 [2024-07-14 05:47:47.702263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.713317] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.714668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.718 [2024-07-14 05:47:47.714707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.725512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.726854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:8076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.718 [2024-07-14 05:47:47.726893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.737844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.739073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.718 [2024-07-14 05:47:47.739103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.750360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.751714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.718 [2024-07-14 05:47:47.751747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.762959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.718 [2024-07-14 05:47:47.764249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.718 [2024-07-14 05:47:47.764281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.718 [2024-07-14 05:47:47.775419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.719 [2024-07-14 05:47:47.776767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.719 [2024-07-14 05:47:47.776801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.719 [2024-07-14 05:47:47.788004] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.719 [2024-07-14 05:47:47.789307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.719 [2024-07-14 05:47:47.789339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.719 [2024-07-14 05:47:47.800463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.719 [2024-07-14 05:47:47.801819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.719 [2024-07-14 05:47:47.801851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.719 [2024-07-14 05:47:47.812977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.719 [2024-07-14 05:47:47.814244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:60 nsid:1 lba:17702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.719 [2024-07-14 05:47:47.814273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.976 [2024-07-14 05:47:47.825824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.976 [2024-07-14 05:47:47.827369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.976 [2024-07-14 05:47:47.827402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.976 [2024-07-14 05:47:47.838494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.976 [2024-07-14 05:47:47.839843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.976 [2024-07-14 05:47:47.839899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.976 [2024-07-14 05:47:47.851034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.976 [2024-07-14 05:47:47.852339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.976 [2024-07-14 05:47:47.852372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:47.863553] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 05:47:47.864927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:47.864956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:47.876117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 05:47:47.877430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:47.877462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:47.888627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 05:47:47.889988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:47.890017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:47.901089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 05:47:47.902421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:47.902454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:47.913468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 05:47:47.914810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:47.914843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:47.926030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 05:47:47.927333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:47.927366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:47.938490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 05:47:47.939836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:47.939876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:47.951123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 05:47:47.952437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:47.952469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:47.963663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 05:47:47.965018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:47.965047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:47.975973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 05:47:47.977285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:47.977314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:47.988425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 
05:47:47.989782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:47.989814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:48.000988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 05:47:48.002272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:48.002300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:48.013397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 05:47:48.014750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:48.014782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:48.025982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 05:47:48.027351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:48.027385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:48.038413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 05:47:48.039782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:48.039822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:48.050915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 05:47:48.052192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:48.052220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:48.063252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 05:47:48.064569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:48.064601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:40.977 [2024-07-14 05:47:48.075660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with 
pdu=0x2000190fa3a0 00:33:40.977 [2024-07-14 05:47:48.077013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.977 [2024-07-14 05:47:48.077041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.235 [2024-07-14 05:47:48.088762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.235 [2024-07-14 05:47:48.090112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.235 [2024-07-14 05:47:48.090141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.235 [2024-07-14 05:47:48.101158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.235 [2024-07-14 05:47:48.102506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.235 [2024-07-14 05:47:48.102539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.235 [2024-07-14 05:47:48.113611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.235 [2024-07-14 05:47:48.114974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.235 [2024-07-14 05:47:48.115003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.235 [2024-07-14 05:47:48.126096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.235 [2024-07-14 05:47:48.127418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.235 [2024-07-14 05:47:48.127451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.235 [2024-07-14 05:47:48.138600] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.235 [2024-07-14 05:47:48.139979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.235 [2024-07-14 05:47:48.140007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.235 [2024-07-14 05:47:48.151135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.235 [2024-07-14 05:47:48.152489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.235 [2024-07-14 05:47:48.152521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.235 [2024-07-14 05:47:48.163621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.235 [2024-07-14 05:47:48.165011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.235 [2024-07-14 05:47:48.165041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.235 [2024-07-14 05:47:48.176068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.235 [2024-07-14 05:47:48.177382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.235 [2024-07-14 05:47:48.177414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.235 [2024-07-14 05:47:48.188525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.235 [2024-07-14 05:47:48.189889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.235 [2024-07-14 05:47:48.189937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.235 [2024-07-14 05:47:48.200964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.235 [2024-07-14 05:47:48.202273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.235 [2024-07-14 05:47:48.202303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.235 [2024-07-14 05:47:48.213323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.235 [2024-07-14 05:47:48.214675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.235 [2024-07-14 05:47:48.214707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.235 [2024-07-14 05:47:48.225637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.235 [2024-07-14 05:47:48.227017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.235 [2024-07-14 05:47:48.227047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.235 [2024-07-14 05:47:48.238100] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.235 [2024-07-14 05:47:48.239404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.235 [2024-07-14 05:47:48.239437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.235 [2024-07-14 05:47:48.250529] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.235 [2024-07-14 05:47:48.251890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.235 [2024-07-14 05:47:48.251934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.235 [2024-07-14 05:47:48.262954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.236 [2024-07-14 05:47:48.264231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.236 [2024-07-14 05:47:48.264261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.236 [2024-07-14 05:47:48.275349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.236 [2024-07-14 05:47:48.276682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.236 [2024-07-14 05:47:48.276714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.236 [2024-07-14 05:47:48.287679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.236 [2024-07-14 05:47:48.289014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.236 [2024-07-14 05:47:48.289044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.236 [2024-07-14 05:47:48.300104] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.236 [2024-07-14 05:47:48.301404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.236 [2024-07-14 05:47:48.301437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.236 [2024-07-14 05:47:48.312402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.236 [2024-07-14 05:47:48.313748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.236 [2024-07-14 05:47:48.313781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.236 [2024-07-14 05:47:48.324673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.236 [2024-07-14 05:47:48.326010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.236 [2024-07-14 05:47:48.326040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:33:41.236 [2024-07-14 05:47:48.337091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.236 [2024-07-14 05:47:48.338457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.236 [2024-07-14 05:47:48.338490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.350095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.351438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.351471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.362588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.363948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.363982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.375032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.376333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.376366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.387418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.388776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.388809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.400003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.401288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.401323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.412302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.413611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.413643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 
cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.424642] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.426004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.426034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.437049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.438367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.438401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.449474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.450833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.450875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.461988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.463315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.463348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.474364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.475734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.475767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.486652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.488035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.488065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.499120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.500424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.500457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.511592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.512948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.512976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.524128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.525431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.525463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.536523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.537885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.537933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.548818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.550138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.550182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.561284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.562627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.562660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.573650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.575012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.575041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.586088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.494 [2024-07-14 05:47:48.587424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.494 [2024-07-14 05:47:48.587456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.494 [2024-07-14 05:47:48.598826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.600340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.600373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.611551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.612921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.612949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.624023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.625322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.625355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.636467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.637823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.637856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.648943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.650249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.650276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.661319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.662656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.662688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.673847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.675168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 
05:47:48.675196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.686181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.687518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.687558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.698632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.700037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.700068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.711140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.712449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.712482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.723508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.724873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.724920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.735723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.737082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.737111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.748215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.749547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.749580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.760983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.762277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24399 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:41.752 [2024-07-14 05:47:48.762309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.773326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.774690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.774722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.785757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.787117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.787147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.798217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.799541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.799573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.810606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.811988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.812023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.823089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.824388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.824420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.752 [2024-07-14 05:47:48.835492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.752 [2024-07-14 05:47:48.836844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.752 [2024-07-14 05:47:48.836882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.753 [2024-07-14 05:47:48.847999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:41.753 [2024-07-14 05:47:48.849328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 
lba:9862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.753 [2024-07-14 05:47:48.849362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.010 [2024-07-14 05:47:48.860977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.010 [2024-07-14 05:47:48.862252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.010 [2024-07-14 05:47:48.862280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.010 [2024-07-14 05:47:48.873253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.010 [2024-07-14 05:47:48.874594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.010 [2024-07-14 05:47:48.874627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.010 [2024-07-14 05:47:48.885653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.010 [2024-07-14 05:47:48.887021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.010 [2024-07-14 05:47:48.887050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.010 [2024-07-14 05:47:48.898131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.010 [2024-07-14 05:47:48.899438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.010 [2024-07-14 05:47:48.899470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.010 [2024-07-14 05:47:48.910632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.010 [2024-07-14 05:47:48.912020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.010 [2024-07-14 05:47:48.912049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.010 [2024-07-14 05:47:48.923129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.010 [2024-07-14 05:47:48.924450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.010 [2024-07-14 05:47:48.924483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.010 [2024-07-14 05:47:48.935570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.010 [2024-07-14 05:47:48.936943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:108 nsid:1 lba:22296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.010 [2024-07-14 05:47:48.936988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.010 [2024-07-14 05:47:48.948033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.010 [2024-07-14 05:47:48.949297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.010 [2024-07-14 05:47:48.949325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.010 [2024-07-14 05:47:48.960370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.010 [2024-07-14 05:47:48.961720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.011 [2024-07-14 05:47:48.961754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.011 [2024-07-14 05:47:48.972720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.011 [2024-07-14 05:47:48.974080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.011 [2024-07-14 05:47:48.974109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.011 [2024-07-14 05:47:48.985110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.011 [2024-07-14 05:47:48.986450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.011 [2024-07-14 05:47:48.986493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.011 [2024-07-14 05:47:48.997268] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.011 [2024-07-14 05:47:48.998591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.011 [2024-07-14 05:47:48.998624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.011 [2024-07-14 05:47:49.009691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.011 [2024-07-14 05:47:49.011059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.011 [2024-07-14 05:47:49.011095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.011 [2024-07-14 05:47:49.022014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.011 [2024-07-14 05:47:49.023397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.011 [2024-07-14 05:47:49.023430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.011 [2024-07-14 05:47:49.034424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.011 [2024-07-14 05:47:49.035798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.011 [2024-07-14 05:47:49.035830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.011 [2024-07-14 05:47:49.046821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.011 [2024-07-14 05:47:49.048179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.011 [2024-07-14 05:47:49.048209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.011 [2024-07-14 05:47:49.059228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.011 [2024-07-14 05:47:49.060559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.011 [2024-07-14 05:47:49.060592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.011 [2024-07-14 05:47:49.071603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.011 [2024-07-14 05:47:49.072961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.011 [2024-07-14 05:47:49.072991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.011 [2024-07-14 05:47:49.083925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.011 [2024-07-14 05:47:49.085261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.011 [2024-07-14 05:47:49.085291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.011 [2024-07-14 05:47:49.096381] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.011 [2024-07-14 05:47:49.097741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.011 [2024-07-14 05:47:49.097772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.011 [2024-07-14 05:47:49.109003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.011 
[2024-07-14 05:47:49.110350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.011 [2024-07-14 05:47:49.110383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.268 [2024-07-14 05:47:49.122131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.268 [2024-07-14 05:47:49.123462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.268 [2024-07-14 05:47:49.123495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.268 [2024-07-14 05:47:49.134628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.268 [2024-07-14 05:47:49.136006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.268 [2024-07-14 05:47:49.136035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.268 [2024-07-14 05:47:49.147266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.268 [2024-07-14 05:47:49.148663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.268 [2024-07-14 05:47:49.148696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.268 [2024-07-14 05:47:49.159825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.268 [2024-07-14 05:47:49.161197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.268 [2024-07-14 05:47:49.161225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.268 [2024-07-14 05:47:49.172365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.268 [2024-07-14 05:47:49.173698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.268 [2024-07-14 05:47:49.173730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.268 [2024-07-14 05:47:49.184953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with pdu=0x2000190fa3a0 00:33:42.268 [2024-07-14 05:47:49.186275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.268 [2024-07-14 05:47:49.186306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.268 [2024-07-14 05:47:49.197500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0abc0) with 
pdu=0x2000190fa3a0 00:33:42.268 [2024-07-14 05:47:49.198888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.268 [2024-07-14 05:47:49.198933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.268 00:33:42.268 Latency(us) 00:33:42.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.268 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:42.269 nvme0n1 : 2.01 20394.38 79.67 0.00 0.00 6266.31 2864.17 14272.28 00:33:42.269 =================================================================================================================== 00:33:42.269 Total : 20394.38 79.67 0.00 0.00 6266.31 2864.17 14272.28 00:33:42.269 0 00:33:42.269 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:42.269 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:42.269 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:42.269 | .driver_specific 00:33:42.269 | .nvme_error 00:33:42.269 | .status_code 00:33:42.269 | .command_transient_transport_error' 00:33:42.269 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:42.526 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 160 > 0 )) 00:33:42.526 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3385894 00:33:42.526 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3385894 ']' 00:33:42.526 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3385894 00:33:42.526 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:42.526 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:42.526 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3385894 00:33:42.526 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:42.526 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:42.526 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3385894' 00:33:42.526 killing process with pid 3385894 00:33:42.526 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3385894 00:33:42.526 Received shutdown signal, test time was about 2.000000 seconds 00:33:42.526 00:33:42.526 Latency(us) 00:33:42.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.526 =================================================================================================================== 00:33:42.526 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:42.526 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3385894 00:33:42.784 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:42.784 05:47:49 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:42.784 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:42.784 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:42.784 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:42.784 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3386297 00:33:42.784 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:42.784 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3386297 /var/tmp/bperf.sock 00:33:42.784 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3386297 ']' 00:33:42.784 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:42.784 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:42.784 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:42.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:42.784 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:42.784 05:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:42.784 [2024-07-14 05:47:49.771217] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:42.784 [2024-07-14 05:47:49.771295] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3386297 ] 00:33:42.784 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:42.784 Zero copy mechanism will not be used. 
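The Latency(us) summary for the first bdevperf job earlier in the trace (2.01 s runtime, 20394.38 IOPS, 79.67 MiB/s) is self-consistent at that job's 4096-byte IO size: 20394.38 x 4096 / 2^20 is roughly 79.67 MiB/s. The "(( 160 > 0 ))" check at host/digest.sh@71 is the pass criterion for that run: it requires a non-zero count of transient transport errors, obtained from the bdev iostat RPC piped through jq as traced at host/digest.sh@27-28. A minimal standalone sketch of that query, assuming the same /var/tmp/bperf.sock RPC socket and nvme0n1 bdev name used in this run and an SPDK checkout as the working directory (the trace itself uses the absolute workspace path), would be:

  # Count of COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1,
  # using the same RPC call and jq filter that appear in the host/digest.sh trace above.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'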
00:33:42.784 EAL: No free 2048 kB hugepages reported on node 1 00:33:42.784 [2024-07-14 05:47:49.833477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.042 [2024-07-14 05:47:49.922361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.042 05:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:43.042 05:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:43.042 05:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:43.042 05:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:43.299 05:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:43.299 05:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.299 05:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:43.299 05:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.299 05:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:43.299 05:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:43.864 nvme0n1 00:33:43.864 05:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:43.864 05:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.864 05:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:43.864 05:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.864 05:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:43.864 05:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:43.864 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:43.864 Zero copy mechanism will not be used. 00:33:43.864 Running I/O for 2 seconds... 
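Once the socket answers, the xtrace lines above reduce to a short RPC sequence before the two-second run is kicked off. Restated as plain commands, with $SPDK and $SOCK as in the previous sketch: the addresses, NQN and injection parameters are copied from the trace, while the target socket below is the stock SPDK default and is an assumption, since the trace reaches the nvmf target through the suite's rpc_cmd helper rather than through the bdevperf socket. The trace also clears any earlier injection with -t disable before re-arming it.

    TGT_SOCK=/var/tmp/spdk.sock   # assumed: default RPC socket of the nvmf target behind rpc_cmd

    # Count NVMe error statuses per controller and retry failed I/O indefinitely,
    # so injected digest errors show up as statistics instead of failing the job.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target with TCP data digest enabled; the digest is what the
    # injected CRC32C corruption will later invalidate.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # On the target side, corrupt every 32nd CRC32C calculation in the accel layer.
    "$SPDK/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Start the randwrite workload bdevperf was configured with (-t 2 -q 16 -o 131072).
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests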
00:33:43.864 [2024-07-14 05:47:50.857841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:43.864 [2024-07-14 05:47:50.858447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.864 [2024-07-14 05:47:50.858485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.864 [2024-07-14 05:47:50.877858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:43.864 [2024-07-14 05:47:50.878472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.864 [2024-07-14 05:47:50.878505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.864 [2024-07-14 05:47:50.899335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:43.864 [2024-07-14 05:47:50.899874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.864 [2024-07-14 05:47:50.899929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.864 [2024-07-14 05:47:50.919137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:43.864 [2024-07-14 05:47:50.919521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.864 [2024-07-14 05:47:50.919549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.864 [2024-07-14 05:47:50.937309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:43.864 [2024-07-14 05:47:50.937817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.864 [2024-07-14 05:47:50.937864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.864 [2024-07-14 05:47:50.958424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:43.864 [2024-07-14 05:47:50.959012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.864 [2024-07-14 05:47:50.959057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.122 [2024-07-14 05:47:50.977037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.122 [2024-07-14 05:47:50.977541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.122 [2024-07-14 05:47:50.977586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.122 [2024-07-14 05:47:50.997430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.122 [2024-07-14 05:47:50.998036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.122 [2024-07-14 05:47:50.998064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.122 [2024-07-14 05:47:51.015853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.122 [2024-07-14 05:47:51.016422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.122 [2024-07-14 05:47:51.016467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.122 [2024-07-14 05:47:51.036116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.122 [2024-07-14 05:47:51.036653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.122 [2024-07-14 05:47:51.036695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.122 [2024-07-14 05:47:51.052960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.122 [2024-07-14 05:47:51.053445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.122 [2024-07-14 05:47:51.053489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.122 [2024-07-14 05:47:51.072892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.122 [2024-07-14 05:47:51.073478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.122 [2024-07-14 05:47:51.073520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.122 [2024-07-14 05:47:51.093566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.122 [2024-07-14 05:47:51.094048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.122 [2024-07-14 05:47:51.094090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.122 [2024-07-14 05:47:51.114029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.122 [2024-07-14 05:47:51.114500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.122 [2024-07-14 05:47:51.114527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.122 [2024-07-14 05:47:51.134592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.122 [2024-07-14 05:47:51.135125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.122 [2024-07-14 05:47:51.135171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.122 [2024-07-14 05:47:51.154241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.122 [2024-07-14 05:47:51.154701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.122 [2024-07-14 05:47:51.154728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.122 [2024-07-14 05:47:51.174238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.122 [2024-07-14 05:47:51.174772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.122 [2024-07-14 05:47:51.174817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.122 [2024-07-14 05:47:51.194059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.122 [2024-07-14 05:47:51.194483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.122 [2024-07-14 05:47:51.194525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.122 [2024-07-14 05:47:51.212429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.122 [2024-07-14 05:47:51.212834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.122 [2024-07-14 05:47:51.212888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.381 [2024-07-14 05:47:51.231400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.381 [2024-07-14 05:47:51.231977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.381 [2024-07-14 05:47:51.232018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.381 [2024-07-14 05:47:51.252320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.381 [2024-07-14 05:47:51.252754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.381 [2024-07-14 05:47:51.252797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.381 [2024-07-14 05:47:51.273329] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.381 [2024-07-14 05:47:51.273733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.381 [2024-07-14 05:47:51.273776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.381 [2024-07-14 05:47:51.295011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.381 [2024-07-14 05:47:51.295566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.381 [2024-07-14 05:47:51.295593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.381 [2024-07-14 05:47:51.314853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.381 [2024-07-14 05:47:51.315255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.381 [2024-07-14 05:47:51.315299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.381 [2024-07-14 05:47:51.335134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.381 [2024-07-14 05:47:51.335601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.381 [2024-07-14 05:47:51.335630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.381 [2024-07-14 05:47:51.354475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.381 [2024-07-14 05:47:51.354939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.381 [2024-07-14 05:47:51.354969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.381 [2024-07-14 05:47:51.374353] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.381 [2024-07-14 05:47:51.374968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.381 [2024-07-14 05:47:51.375000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.381 [2024-07-14 05:47:51.396037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.381 [2024-07-14 05:47:51.396560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.381 
[2024-07-14 05:47:51.396605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.381 [2024-07-14 05:47:51.417332] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.381 [2024-07-14 05:47:51.417747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.381 [2024-07-14 05:47:51.417781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.381 [2024-07-14 05:47:51.436721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.381 [2024-07-14 05:47:51.437153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.381 [2024-07-14 05:47:51.437181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.381 [2024-07-14 05:47:51.456462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.381 [2024-07-14 05:47:51.456886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.381 [2024-07-14 05:47:51.456937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.381 [2024-07-14 05:47:51.476125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.381 [2024-07-14 05:47:51.476519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.381 [2024-07-14 05:47:51.476562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.640 [2024-07-14 05:47:51.494284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.640 [2024-07-14 05:47:51.494812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.640 [2024-07-14 05:47:51.494855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.640 [2024-07-14 05:47:51.515689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.640 [2024-07-14 05:47:51.516175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.640 [2024-07-14 05:47:51.516204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.640 [2024-07-14 05:47:51.539016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.640 [2024-07-14 05:47:51.539649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:44.640 [2024-07-14 05:47:51.539676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.640 [2024-07-14 05:47:51.558529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.640 [2024-07-14 05:47:51.558955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.640 [2024-07-14 05:47:51.558983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.640 [2024-07-14 05:47:51.578686] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.640 [2024-07-14 05:47:51.579212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.640 [2024-07-14 05:47:51.579253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.640 [2024-07-14 05:47:51.598183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.640 [2024-07-14 05:47:51.598840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.640 [2024-07-14 05:47:51.598872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.640 [2024-07-14 05:47:51.616467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.640 [2024-07-14 05:47:51.616994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.640 [2024-07-14 05:47:51.617036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.640 [2024-07-14 05:47:51.636711] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.640 [2024-07-14 05:47:51.637258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.640 [2024-07-14 05:47:51.637284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.640 [2024-07-14 05:47:51.657522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.640 [2024-07-14 05:47:51.658053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.640 [2024-07-14 05:47:51.658081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.640 [2024-07-14 05:47:51.678987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.640 [2024-07-14 05:47:51.679471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.640 [2024-07-14 05:47:51.679516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.640 [2024-07-14 05:47:51.700242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.640 [2024-07-14 05:47:51.700766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.640 [2024-07-14 05:47:51.700809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.640 [2024-07-14 05:47:51.720350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.640 [2024-07-14 05:47:51.720885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.640 [2024-07-14 05:47:51.720913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.640 [2024-07-14 05:47:51.741276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.640 [2024-07-14 05:47:51.741780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.640 [2024-07-14 05:47:51.741807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.899 [2024-07-14 05:47:51.761710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.899 [2024-07-14 05:47:51.762226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.899 [2024-07-14 05:47:51.762267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.899 [2024-07-14 05:47:51.781143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.899 [2024-07-14 05:47:51.781568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.899 [2024-07-14 05:47:51.781611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.899 [2024-07-14 05:47:51.800670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.899 [2024-07-14 05:47:51.801146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.899 [2024-07-14 05:47:51.801188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.899 [2024-07-14 05:47:51.818927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.899 [2024-07-14 05:47:51.819307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.899 [2024-07-14 05:47:51.819335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.899 [2024-07-14 05:47:51.839000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.899 [2024-07-14 05:47:51.839502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.899 [2024-07-14 05:47:51.839548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.899 [2024-07-14 05:47:51.860667] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.899 [2024-07-14 05:47:51.861246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.899 [2024-07-14 05:47:51.861288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.899 [2024-07-14 05:47:51.880606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.899 [2024-07-14 05:47:51.881191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.899 [2024-07-14 05:47:51.881236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.899 [2024-07-14 05:47:51.901653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.899 [2024-07-14 05:47:51.902222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.899 [2024-07-14 05:47:51.902267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.899 [2024-07-14 05:47:51.921484] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.899 [2024-07-14 05:47:51.921898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.899 [2024-07-14 05:47:51.921927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.899 [2024-07-14 05:47:51.943011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.899 [2024-07-14 05:47:51.943518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.899 [2024-07-14 05:47:51.943568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.900 [2024-07-14 05:47:51.963129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.900 
[2024-07-14 05:47:51.963646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.900 [2024-07-14 05:47:51.963691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.900 [2024-07-14 05:47:51.983930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:44.900 [2024-07-14 05:47:51.984432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.900 [2024-07-14 05:47:51.984458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.900 [2024-07-14 05:47:52.004936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.158 [2024-07-14 05:47:52.005542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.158 [2024-07-14 05:47:52.005587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.158 [2024-07-14 05:47:52.025041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.158 [2024-07-14 05:47:52.025461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.158 [2024-07-14 05:47:52.025488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.158 [2024-07-14 05:47:52.044814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.158 [2024-07-14 05:47:52.045402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.158 [2024-07-14 05:47:52.045434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.158 [2024-07-14 05:47:52.065175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.158 [2024-07-14 05:47:52.065583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.158 [2024-07-14 05:47:52.065611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.158 [2024-07-14 05:47:52.082223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.158 [2024-07-14 05:47:52.082643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.158 [2024-07-14 05:47:52.082687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.158 [2024-07-14 05:47:52.101203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.158 [2024-07-14 05:47:52.101696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.158 [2024-07-14 05:47:52.101725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.158 [2024-07-14 05:47:52.122844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.158 [2024-07-14 05:47:52.123389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.158 [2024-07-14 05:47:52.123431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.158 [2024-07-14 05:47:52.141739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.158 [2024-07-14 05:47:52.142371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.158 [2024-07-14 05:47:52.142416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.158 [2024-07-14 05:47:52.161367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.158 [2024-07-14 05:47:52.161964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.158 [2024-07-14 05:47:52.161991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.158 [2024-07-14 05:47:52.180483] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.158 [2024-07-14 05:47:52.180951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.158 [2024-07-14 05:47:52.180980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.158 [2024-07-14 05:47:52.201839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.158 [2024-07-14 05:47:52.202350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.158 [2024-07-14 05:47:52.202396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.158 [2024-07-14 05:47:52.222369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.158 [2024-07-14 05:47:52.222853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.158 [2024-07-14 05:47:52.222904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.158 [2024-07-14 05:47:52.243072] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.158 [2024-07-14 05:47:52.243607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.158 [2024-07-14 05:47:52.243651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.158 [2024-07-14 05:47:52.262546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.158 [2024-07-14 05:47:52.262900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.158 [2024-07-14 05:47:52.262928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.417 [2024-07-14 05:47:52.282298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.417 [2024-07-14 05:47:52.282673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.417 [2024-07-14 05:47:52.282715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.417 [2024-07-14 05:47:52.303050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.417 [2024-07-14 05:47:52.303435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.417 [2024-07-14 05:47:52.303478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.417 [2024-07-14 05:47:52.323189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.417 [2024-07-14 05:47:52.323697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.417 [2024-07-14 05:47:52.323742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.417 [2024-07-14 05:47:52.344355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.417 [2024-07-14 05:47:52.344836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.417 [2024-07-14 05:47:52.344886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.417 [2024-07-14 05:47:52.362802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.417 [2024-07-14 05:47:52.363287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.417 [2024-07-14 05:47:52.363328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:45.417 [2024-07-14 05:47:52.382325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.417 [2024-07-14 05:47:52.382860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.417 [2024-07-14 05:47:52.382915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.417 [2024-07-14 05:47:52.402207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.417 [2024-07-14 05:47:52.402827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.417 [2024-07-14 05:47:52.402855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.417 [2024-07-14 05:47:52.422283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.417 [2024-07-14 05:47:52.422664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.417 [2024-07-14 05:47:52.422692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.417 [2024-07-14 05:47:52.440825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.417 [2024-07-14 05:47:52.441308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.417 [2024-07-14 05:47:52.441351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.417 [2024-07-14 05:47:52.460651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.417 [2024-07-14 05:47:52.461121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.417 [2024-07-14 05:47:52.461188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.417 [2024-07-14 05:47:52.480402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.417 [2024-07-14 05:47:52.481024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.417 [2024-07-14 05:47:52.481053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.417 [2024-07-14 05:47:52.499468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.417 [2024-07-14 05:47:52.499877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.417 [2024-07-14 05:47:52.499906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.417 [2024-07-14 05:47:52.518683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.417 [2024-07-14 05:47:52.519191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.417 [2024-07-14 05:47:52.519227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.676 [2024-07-14 05:47:52.538756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.676 [2024-07-14 05:47:52.539322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.676 [2024-07-14 05:47:52.539363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.676 [2024-07-14 05:47:52.558886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.676 [2024-07-14 05:47:52.559267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.676 [2024-07-14 05:47:52.559295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.676 [2024-07-14 05:47:52.579613] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.676 [2024-07-14 05:47:52.580035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.676 [2024-07-14 05:47:52.580077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.676 [2024-07-14 05:47:52.601044] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.676 [2024-07-14 05:47:52.601470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.676 [2024-07-14 05:47:52.601497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.676 [2024-07-14 05:47:52.621805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.676 [2024-07-14 05:47:52.622237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.676 [2024-07-14 05:47:52.622266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.677 [2024-07-14 05:47:52.642169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.677 [2024-07-14 05:47:52.642676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.677 [2024-07-14 05:47:52.642722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.677 [2024-07-14 05:47:52.660144] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.677 [2024-07-14 05:47:52.660659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.677 [2024-07-14 05:47:52.660685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.677 [2024-07-14 05:47:52.679619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.677 [2024-07-14 05:47:52.680161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.677 [2024-07-14 05:47:52.680189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.677 [2024-07-14 05:47:52.701061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.677 [2024-07-14 05:47:52.701562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.677 [2024-07-14 05:47:52.701607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.677 [2024-07-14 05:47:52.722267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.677 [2024-07-14 05:47:52.722717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.677 [2024-07-14 05:47:52.722761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.677 [2024-07-14 05:47:52.743237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.677 [2024-07-14 05:47:52.743880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.677 [2024-07-14 05:47:52.743906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.677 [2024-07-14 05:47:52.764296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.677 [2024-07-14 05:47:52.764793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.677 [2024-07-14 05:47:52.764838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.935 [2024-07-14 05:47:52.785029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.935 [2024-07-14 05:47:52.785444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.935 [2024-07-14 05:47:52.785477] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.935 [2024-07-14 05:47:52.805247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.935 [2024-07-14 05:47:52.805449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.935 [2024-07-14 05:47:52.805483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.935 [2024-07-14 05:47:52.825836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.935 [2024-07-14 05:47:52.826270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.935 [2024-07-14 05:47:52.826313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.935 [2024-07-14 05:47:52.845666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b0ae90) with pdu=0x2000190fef90 00:33:45.935 [2024-07-14 05:47:52.846104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.935 [2024-07-14 05:47:52.846131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.935 00:33:45.935 Latency(us) 00:33:45.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.935 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:45.935 nvme0n1 : 2.01 1542.44 192.80 0.00 0.00 10340.12 7039.05 22816.24 00:33:45.935 =================================================================================================================== 00:33:45.935 Total : 1542.44 192.80 0.00 0.00 10340.12 7039.05 22816.24 00:33:45.935 0 00:33:45.935 05:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:45.935 05:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:45.935 05:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:45.935 05:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:45.935 | .driver_specific 00:33:45.935 | .nvme_error 00:33:45.935 | .status_code 00:33:45.935 | .command_transient_transport_error' 00:33:46.194 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 100 > 0 )) 00:33:46.194 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3386297 00:33:46.194 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3386297 ']' 00:33:46.194 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3386297 00:33:46.194 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:46.194 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
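The (( 100 > 0 )) check above simply counts how many completions were classified as transient transport errors. The extraction shown in the trace can be reproduced on its own as follows, with the bdev name, socket and jq filter copied from the trace and the variable name illustrative:

    # Read per-bdev I/O statistics from bdevperf and pull out the count of
    # completions that ended in COMMAND TRANSIENT TRANSPORT ERROR.
    errs=$("$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

    # The digest-error test passes only if at least one injected corruption
    # surfaced this way.
    if (( errs > 0 )); then
        echo "observed $errs transient transport errors on nvme0n1"
    fi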
00:33:46.194 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3386297 00:33:46.194 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:46.194 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:46.194 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3386297' 00:33:46.194 killing process with pid 3386297 00:33:46.194 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3386297 00:33:46.194 Received shutdown signal, test time was about 2.000000 seconds 00:33:46.194 00:33:46.194 Latency(us) 00:33:46.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.194 =================================================================================================================== 00:33:46.194 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:46.194 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3386297 00:33:46.452 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3384938 00:33:46.452 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3384938 ']' 00:33:46.452 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3384938 00:33:46.452 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:46.452 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:46.452 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3384938 00:33:46.452 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:46.452 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:46.452 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3384938' 00:33:46.452 killing process with pid 3384938 00:33:46.452 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3384938 00:33:46.452 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3384938 00:33:46.711 00:33:46.711 real 0m14.920s 00:33:46.711 user 0m30.071s 00:33:46.711 sys 0m3.715s 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:46.711 ************************************ 00:33:46.711 END TEST nvmf_digest_error 00:33:46.711 ************************************ 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # 
for i in {1..20} 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:46.711 rmmod nvme_tcp 00:33:46.711 rmmod nvme_fabrics 00:33:46.711 rmmod nvme_keyring 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3384938 ']' 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3384938 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 3384938 ']' 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 3384938 00:33:46.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3384938) - No such process 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 3384938 is not found' 00:33:46.711 Process with pid 3384938 is not found 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:46.711 05:47:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.243 05:47:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:49.243 00:33:49.243 real 0m34.631s 00:33:49.243 user 1m1.804s 00:33:49.243 sys 0m9.021s 00:33:49.243 05:47:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:49.243 05:47:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:49.243 ************************************ 00:33:49.243 END TEST nvmf_digest 00:33:49.243 ************************************ 00:33:49.243 05:47:55 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:33:49.243 05:47:55 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:33:49.243 05:47:55 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:33:49.243 05:47:55 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:49.243 05:47:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:49.243 05:47:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:49.243 05:47:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.243 ************************************ 00:33:49.243 START TEST nvmf_bdevperf 00:33:49.243 ************************************ 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:49.243 * Looking for test storage... 
00:33:49.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:49.243 05:47:55 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:49.244 05:47:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:51.147 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:51.147 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:51.147 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:51.147 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:51.147 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:51.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:51.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:33:51.148 00:33:51.148 --- 10.0.0.2 ping statistics --- 00:33:51.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.148 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:51.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:51.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:33:51.148 00:33:51.148 --- 10.0.0.1 ping statistics --- 00:33:51.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.148 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3388643 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3388643 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3388643 ']' 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:51.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:51.148 05:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.148 [2024-07-14 05:47:58.015938] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:51.148 [2024-07-14 05:47:58.016033] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:51.148 EAL: No free 2048 kB hugepages reported on node 1 00:33:51.148 [2024-07-14 05:47:58.086248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:51.148 [2024-07-14 05:47:58.181629] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:51.148 [2024-07-14 05:47:58.181682] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:51.148 [2024-07-14 05:47:58.181699] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:51.148 [2024-07-14 05:47:58.181712] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:51.148 [2024-07-14 05:47:58.181724] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:51.148 [2024-07-14 05:47:58.182141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:51.148 [2024-07-14 05:47:58.182170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:51.148 [2024-07-14 05:47:58.182174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.407 [2024-07-14 05:47:58.319441] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.407 Malloc0 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.407 [2024-07-14 05:47:58.384703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:51.407 { 00:33:51.407 "params": { 00:33:51.407 "name": "Nvme$subsystem", 00:33:51.407 "trtype": "$TEST_TRANSPORT", 00:33:51.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:51.407 "adrfam": "ipv4", 00:33:51.407 "trsvcid": "$NVMF_PORT", 00:33:51.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:51.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:51.407 "hdgst": ${hdgst:-false}, 00:33:51.407 "ddgst": ${ddgst:-false} 00:33:51.407 }, 00:33:51.407 "method": "bdev_nvme_attach_controller" 00:33:51.407 } 00:33:51.407 EOF 00:33:51.407 )") 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:51.407 05:47:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:51.407 "params": { 00:33:51.407 "name": "Nvme1", 00:33:51.407 "trtype": "tcp", 00:33:51.407 "traddr": "10.0.0.2", 00:33:51.407 "adrfam": "ipv4", 00:33:51.407 "trsvcid": "4420", 00:33:51.407 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:51.407 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:51.407 "hdgst": false, 00:33:51.407 "ddgst": false 00:33:51.407 }, 00:33:51.407 "method": "bdev_nvme_attach_controller" 00:33:51.407 }' 00:33:51.407 [2024-07-14 05:47:58.432743] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:51.407 [2024-07-14 05:47:58.432822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3388791 ] 00:33:51.407 EAL: No free 2048 kB hugepages reported on node 1 00:33:51.407 [2024-07-14 05:47:58.493234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.665 [2024-07-14 05:47:58.584615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.924 Running I/O for 1 seconds... 
00:33:52.858 00:33:52.858 Latency(us) 00:33:52.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:52.858 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:52.858 Verification LBA range: start 0x0 length 0x4000 00:33:52.858 Nvme1n1 : 1.01 8714.96 34.04 0.00 0.00 14616.75 2718.53 16505.36 00:33:52.858 =================================================================================================================== 00:33:52.858 Total : 8714.96 34.04 0.00 0.00 14616.75 2718.53 16505.36 00:33:53.116 05:48:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3388946 00:33:53.116 05:48:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:53.116 05:48:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:53.116 05:48:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:53.116 05:48:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:53.116 05:48:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:53.116 05:48:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:53.116 05:48:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:53.116 { 00:33:53.116 "params": { 00:33:53.116 "name": "Nvme$subsystem", 00:33:53.116 "trtype": "$TEST_TRANSPORT", 00:33:53.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:53.116 "adrfam": "ipv4", 00:33:53.116 "trsvcid": "$NVMF_PORT", 00:33:53.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:53.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:53.116 "hdgst": ${hdgst:-false}, 00:33:53.116 "ddgst": ${ddgst:-false} 00:33:53.116 }, 00:33:53.116 "method": "bdev_nvme_attach_controller" 00:33:53.116 } 00:33:53.116 EOF 00:33:53.116 )") 00:33:53.116 05:48:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:53.116 05:48:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:53.116 05:48:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:53.116 05:48:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:53.116 "params": { 00:33:53.116 "name": "Nvme1", 00:33:53.116 "trtype": "tcp", 00:33:53.116 "traddr": "10.0.0.2", 00:33:53.116 "adrfam": "ipv4", 00:33:53.116 "trsvcid": "4420", 00:33:53.116 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:53.116 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:53.116 "hdgst": false, 00:33:53.116 "ddgst": false 00:33:53.116 }, 00:33:53.116 "method": "bdev_nvme_attach_controller" 00:33:53.116 }' 00:33:53.116 [2024-07-14 05:48:00.079142] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:53.116 [2024-07-14 05:48:00.079239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3388946 ] 00:33:53.116 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.116 [2024-07-14 05:48:00.140501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.374 [2024-07-14 05:48:00.230618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.374 Running I/O for 15 seconds... 
00:33:55.939 05:48:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3388643 00:33:55.939 05:48:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:56.201 [2024-07-14 05:48:03.051006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:56072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.201 [2024-07-14 05:48:03.051056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.201 [2024-07-14 05:48:03.051105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:56088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.201 [2024-07-14 05:48:03.051165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:56096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.201 [2024-07-14 05:48:03.051204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:56104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.201 [2024-07-14 05:48:03.051236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:56112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.201 [2024-07-14 05:48:03.051269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.201 [2024-07-14 05:48:03.051306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.201 [2024-07-14 05:48:03.051342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.201 [2024-07-14 05:48:03.051375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.201 [2024-07-14 05:48:03.051407] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.201 [2024-07-14 05:48:03.051443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:56928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.201 [2024-07-14 05:48:03.051482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.201 [2024-07-14 05:48:03.051518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:56944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.201 [2024-07-14 05:48:03.051558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.201 [2024-07-14 05:48:03.051597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:56960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.201 [2024-07-14 05:48:03.051629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.201 [2024-07-14 05:48:03.051668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:56976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.201 [2024-07-14 05:48:03.051700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.201 [2024-07-14 05:48:03.051731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.201 [2024-07-14 05:48:03.051763] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.201 [2024-07-14 05:48:03.051796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:57008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.201 [2024-07-14 05:48:03.051829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:57016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.201 [2024-07-14 05:48:03.051861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:56128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.201 [2024-07-14 05:48:03.051929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:56136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.201 [2024-07-14 05:48:03.051960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.051975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:56144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.201 [2024-07-14 05:48:03.051989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.052004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:56152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.201 [2024-07-14 05:48:03.052018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.052037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:56160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.201 [2024-07-14 05:48:03.052051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.201 [2024-07-14 05:48:03.052067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:56168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:56176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:56184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:56192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.202 [2024-07-14 05:48:03.052234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:56208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:56216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:56224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:56232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:56240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:56.202 [2024-07-14 05:48:03.052482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:56256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:56264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:56280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:56304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:56312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:56328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052805] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:56344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:56352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:56360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:56368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.052976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.052996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:56376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.053010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.053025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:56384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.053039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.053056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:56392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.053070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.053086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.053099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.053114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:56408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.053128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.053159] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:56416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.053173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.053187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:56424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.053201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.053233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.053248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.053266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.053281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.053297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:56448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.053313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.053334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:56456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.053350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.053367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.053383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.053400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:56472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.053415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.053432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.053448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.053465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:56488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.053480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.053497] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:101 nsid:1 lba:56496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.202 [2024-07-14 05:48:03.053512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.202 [2024-07-14 05:48:03.053529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.053544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.053560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.053576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.053593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.053608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.053625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:56528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.053640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.053657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:56536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.053673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.053689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.053705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.053722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:56552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.053741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.053758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:56560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.053774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.053791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:56568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.053807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.053823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:56576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.053838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.053855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.053879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.053897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:56592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.053927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.053943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:56600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.053956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.053972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:56608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.053993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:56616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:56632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:56648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:56656 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:56664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:56672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:56680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:56688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:56696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:56712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:56728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:56736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 
05:48:03.054524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:56744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:57032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.203 [2024-07-14 05:48:03.054655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:57040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.203 [2024-07-14 05:48:03.054687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:57048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.203 [2024-07-14 05:48:03.054719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.203 [2024-07-14 05:48:03.054751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:57064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.203 [2024-07-14 05:48:03.054783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.203 [2024-07-14 05:48:03.054815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:57080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.203 [2024-07-14 05:48:03.054847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.203 [2024-07-14 05:48:03.054889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.203 [2024-07-14 05:48:03.054952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:56776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.203 [2024-07-14 05:48:03.054966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.204 [2024-07-14 05:48:03.054981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:56784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.204 [2024-07-14 05:48:03.054995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.204 [2024-07-14 05:48:03.055010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:56792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.204 [2024-07-14 05:48:03.055028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.204 [2024-07-14 05:48:03.055044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:56800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.204 [2024-07-14 05:48:03.055058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.204 [2024-07-14 05:48:03.055074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:56808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.204 [2024-07-14 05:48:03.055087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.204 [2024-07-14 05:48:03.055103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:56816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.204 [2024-07-14 05:48:03.055117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.204 [2024-07-14 05:48:03.055132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:56824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.204 [2024-07-14 05:48:03.055166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.204 [2024-07-14 05:48:03.055184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.204 [2024-07-14 05:48:03.055199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.204 [2024-07-14 05:48:03.055216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:56840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.204 [2024-07-14 05:48:03.055232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.204 [2024-07-14 05:48:03.055248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.204 [2024-07-14 05:48:03.055264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.204 [2024-07-14 05:48:03.055280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:56856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.204 [2024-07-14 05:48:03.055296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.204 [2024-07-14 05:48:03.055313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:56864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.204 [2024-07-14 05:48:03.055328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.204 [2024-07-14 05:48:03.055345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:56872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.204 [2024-07-14 05:48:03.055360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.204 [2024-07-14 05:48:03.055377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:56880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.204 [2024-07-14 05:48:03.055393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.204 [2024-07-14 05:48:03.055408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9609a0 is same with the state(5) to be set 00:33:56.204 [2024-07-14 05:48:03.055426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.204 [2024-07-14 05:48:03.055443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.204 [2024-07-14 05:48:03.055457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56888 len:8 PRP1 0x0 PRP2 0x0 00:33:56.204 [2024-07-14 05:48:03.055471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.204 [2024-07-14 05:48:03.055538] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9609a0 was disconnected and freed. reset controller. 
00:33:56.204 [2024-07-14 05:48:03.059441] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.204 [2024-07-14 05:48:03.059517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.204 [2024-07-14 05:48:03.060243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.204 [2024-07-14 05:48:03.060276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.204 [2024-07-14 05:48:03.060294] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.204 [2024-07-14 05:48:03.060536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.204 [2024-07-14 05:48:03.060780] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.204 [2024-07-14 05:48:03.060804] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.204 [2024-07-14 05:48:03.060822] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.204 [2024-07-14 05:48:03.064422] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.204 [2024-07-14 05:48:03.073774] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.204 [2024-07-14 05:48:03.074232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.204 [2024-07-14 05:48:03.074260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.204 [2024-07-14 05:48:03.074290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.204 [2024-07-14 05:48:03.074546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.204 [2024-07-14 05:48:03.074789] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.204 [2024-07-14 05:48:03.074812] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.204 [2024-07-14 05:48:03.074828] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.204 [2024-07-14 05:48:03.078430] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.204 [2024-07-14 05:48:03.087781] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.204 [2024-07-14 05:48:03.088246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.204 [2024-07-14 05:48:03.088278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.204 [2024-07-14 05:48:03.088297] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.204 [2024-07-14 05:48:03.088536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.204 [2024-07-14 05:48:03.088779] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.204 [2024-07-14 05:48:03.088802] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.204 [2024-07-14 05:48:03.088824] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.204 [2024-07-14 05:48:03.092427] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.204 [2024-07-14 05:48:03.101759] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.204 [2024-07-14 05:48:03.102261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.204 [2024-07-14 05:48:03.102293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.204 [2024-07-14 05:48:03.102311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.204 [2024-07-14 05:48:03.102550] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.204 [2024-07-14 05:48:03.102793] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.204 [2024-07-14 05:48:03.102816] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.204 [2024-07-14 05:48:03.102831] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.204 [2024-07-14 05:48:03.106431] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.204 [2024-07-14 05:48:03.115761] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.204 [2024-07-14 05:48:03.116249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.204 [2024-07-14 05:48:03.116279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.204 [2024-07-14 05:48:03.116297] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.204 [2024-07-14 05:48:03.116536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.204 [2024-07-14 05:48:03.116779] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.204 [2024-07-14 05:48:03.116802] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.204 [2024-07-14 05:48:03.116818] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.204 [2024-07-14 05:48:03.120420] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.204 [2024-07-14 05:48:03.129749] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.204 [2024-07-14 05:48:03.130223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.204 [2024-07-14 05:48:03.130254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.204 [2024-07-14 05:48:03.130271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.204 [2024-07-14 05:48:03.130511] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.204 [2024-07-14 05:48:03.130754] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.204 [2024-07-14 05:48:03.130777] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.204 [2024-07-14 05:48:03.130792] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.204 [2024-07-14 05:48:03.134393] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.204 [2024-07-14 05:48:03.143553] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.204 [2024-07-14 05:48:03.143958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.204 [2024-07-14 05:48:03.143993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.204 [2024-07-14 05:48:03.144010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.204 [2024-07-14 05:48:03.144225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.204 [2024-07-14 05:48:03.144444] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.204 [2024-07-14 05:48:03.144465] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.205 [2024-07-14 05:48:03.144479] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.205 [2024-07-14 05:48:03.147700] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.205 [2024-07-14 05:48:03.156824] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.205 [2024-07-14 05:48:03.157297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.205 [2024-07-14 05:48:03.157325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.205 [2024-07-14 05:48:03.157341] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.205 [2024-07-14 05:48:03.157581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.205 [2024-07-14 05:48:03.157780] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.205 [2024-07-14 05:48:03.157799] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.205 [2024-07-14 05:48:03.157812] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.205 [2024-07-14 05:48:03.160834] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.205 [2024-07-14 05:48:03.170140] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.205 [2024-07-14 05:48:03.170632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.205 [2024-07-14 05:48:03.170660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.205 [2024-07-14 05:48:03.170675] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.205 [2024-07-14 05:48:03.170942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.205 [2024-07-14 05:48:03.171163] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.205 [2024-07-14 05:48:03.171183] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.205 [2024-07-14 05:48:03.171196] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.205 [2024-07-14 05:48:03.174150] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.205 [2024-07-14 05:48:03.183396] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.205 [2024-07-14 05:48:03.183904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.205 [2024-07-14 05:48:03.183932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.205 [2024-07-14 05:48:03.183948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.205 [2024-07-14 05:48:03.184202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.205 [2024-07-14 05:48:03.184405] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.205 [2024-07-14 05:48:03.184425] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.205 [2024-07-14 05:48:03.184437] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.205 [2024-07-14 05:48:03.187417] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.205 [2024-07-14 05:48:03.196686] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.205 [2024-07-14 05:48:03.197093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.205 [2024-07-14 05:48:03.197120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.205 [2024-07-14 05:48:03.197137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.205 [2024-07-14 05:48:03.197391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.205 [2024-07-14 05:48:03.197590] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.205 [2024-07-14 05:48:03.197609] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.205 [2024-07-14 05:48:03.197622] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.205 [2024-07-14 05:48:03.200604] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.205 [2024-07-14 05:48:03.210005] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.205 [2024-07-14 05:48:03.210496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.205 [2024-07-14 05:48:03.210524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.205 [2024-07-14 05:48:03.210540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.205 [2024-07-14 05:48:03.210783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.205 [2024-07-14 05:48:03.211025] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.205 [2024-07-14 05:48:03.211046] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.205 [2024-07-14 05:48:03.211059] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.205 [2024-07-14 05:48:03.214015] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.205 [2024-07-14 05:48:03.223240] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.205 [2024-07-14 05:48:03.223675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.205 [2024-07-14 05:48:03.223702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.205 [2024-07-14 05:48:03.223718] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.205 [2024-07-14 05:48:03.223966] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.205 [2024-07-14 05:48:03.224186] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.205 [2024-07-14 05:48:03.224205] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.205 [2024-07-14 05:48:03.224218] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.205 [2024-07-14 05:48:03.227160] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.205 [2024-07-14 05:48:03.236535] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.205 [2024-07-14 05:48:03.236965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.205 [2024-07-14 05:48:03.236994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.205 [2024-07-14 05:48:03.237010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.205 [2024-07-14 05:48:03.237263] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.205 [2024-07-14 05:48:03.237462] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.205 [2024-07-14 05:48:03.237481] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.205 [2024-07-14 05:48:03.237494] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.205 [2024-07-14 05:48:03.240518] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.205 [2024-07-14 05:48:03.249767] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.205 [2024-07-14 05:48:03.250202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.205 [2024-07-14 05:48:03.250230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.205 [2024-07-14 05:48:03.250246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.205 [2024-07-14 05:48:03.250500] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.205 [2024-07-14 05:48:03.250700] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.205 [2024-07-14 05:48:03.250718] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.205 [2024-07-14 05:48:03.250731] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.205 [2024-07-14 05:48:03.253711] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.205 [2024-07-14 05:48:03.262943] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.205 [2024-07-14 05:48:03.263434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.205 [2024-07-14 05:48:03.263461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.205 [2024-07-14 05:48:03.263477] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.205 [2024-07-14 05:48:03.263729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.205 [2024-07-14 05:48:03.263956] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.206 [2024-07-14 05:48:03.263977] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.206 [2024-07-14 05:48:03.263990] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.206 [2024-07-14 05:48:03.266947] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.206 [2024-07-14 05:48:03.276251] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.206 [2024-07-14 05:48:03.276678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.206 [2024-07-14 05:48:03.276705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.206 [2024-07-14 05:48:03.276725] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.206 [2024-07-14 05:48:03.277007] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.206 [2024-07-14 05:48:03.277213] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.206 [2024-07-14 05:48:03.277248] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.206 [2024-07-14 05:48:03.277261] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.206 [2024-07-14 05:48:03.280199] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.206 [2024-07-14 05:48:03.289465] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.206 [2024-07-14 05:48:03.289848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.206 [2024-07-14 05:48:03.289880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.206 [2024-07-14 05:48:03.289897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.206 [2024-07-14 05:48:03.290133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.206 [2024-07-14 05:48:03.290332] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.206 [2024-07-14 05:48:03.290351] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.206 [2024-07-14 05:48:03.290364] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.206 [2024-07-14 05:48:03.293413] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.206 [2024-07-14 05:48:03.303366] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.464 [2024-07-14 05:48:03.303817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.464 [2024-07-14 05:48:03.303847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.464 [2024-07-14 05:48:03.303872] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.464 [2024-07-14 05:48:03.304090] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.464 [2024-07-14 05:48:03.304338] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.464 [2024-07-14 05:48:03.304367] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.464 [2024-07-14 05:48:03.304393] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.464 [2024-07-14 05:48:03.307676] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.464 [2024-07-14 05:48:03.317188] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.464 [2024-07-14 05:48:03.317615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.464 [2024-07-14 05:48:03.317644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.464 [2024-07-14 05:48:03.317660] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.464 [2024-07-14 05:48:03.317897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.464 [2024-07-14 05:48:03.318110] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.464 [2024-07-14 05:48:03.318135] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.464 [2024-07-14 05:48:03.318149] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.464 [2024-07-14 05:48:03.321435] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.464 [2024-07-14 05:48:03.330623] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.464 [2024-07-14 05:48:03.331096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.464 [2024-07-14 05:48:03.331125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.464 [2024-07-14 05:48:03.331141] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.464 [2024-07-14 05:48:03.331380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.464 [2024-07-14 05:48:03.331587] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.464 [2024-07-14 05:48:03.331606] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.464 [2024-07-14 05:48:03.331619] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.464 [2024-07-14 05:48:03.334678] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.464 [2024-07-14 05:48:03.343855] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.464 [2024-07-14 05:48:03.344291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.464 [2024-07-14 05:48:03.344318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.464 [2024-07-14 05:48:03.344348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.464 [2024-07-14 05:48:03.344591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.464 [2024-07-14 05:48:03.344796] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.464 [2024-07-14 05:48:03.344816] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.464 [2024-07-14 05:48:03.344829] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.464 [2024-07-14 05:48:03.347835] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.464 [2024-07-14 05:48:03.357119] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.464 [2024-07-14 05:48:03.357569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.464 [2024-07-14 05:48:03.357597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.464 [2024-07-14 05:48:03.357613] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.464 [2024-07-14 05:48:03.357857] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.464 [2024-07-14 05:48:03.358094] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.464 [2024-07-14 05:48:03.358115] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.464 [2024-07-14 05:48:03.358128] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.464 [2024-07-14 05:48:03.361107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.464 [2024-07-14 05:48:03.370360] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.464 [2024-07-14 05:48:03.370769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.464 [2024-07-14 05:48:03.370797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.464 [2024-07-14 05:48:03.370813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.464 [2024-07-14 05:48:03.371054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.464 [2024-07-14 05:48:03.371280] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.464 [2024-07-14 05:48:03.371300] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.464 [2024-07-14 05:48:03.371313] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.464 [2024-07-14 05:48:03.374288] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.464 [2024-07-14 05:48:03.383561] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.464 [2024-07-14 05:48:03.383980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.464 [2024-07-14 05:48:03.384010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.464 [2024-07-14 05:48:03.384027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.464 [2024-07-14 05:48:03.384270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.464 [2024-07-14 05:48:03.384476] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.464 [2024-07-14 05:48:03.384496] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.464 [2024-07-14 05:48:03.384509] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.464 [2024-07-14 05:48:03.387513] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.464 [2024-07-14 05:48:03.396833] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.464 [2024-07-14 05:48:03.397327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.464 [2024-07-14 05:48:03.397355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.464 [2024-07-14 05:48:03.397371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.464 [2024-07-14 05:48:03.397611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.464 [2024-07-14 05:48:03.397817] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.464 [2024-07-14 05:48:03.397837] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.465 [2024-07-14 05:48:03.397850] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.465 [2024-07-14 05:48:03.400853] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.465 [2024-07-14 05:48:03.410112] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.465 [2024-07-14 05:48:03.410523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.465 [2024-07-14 05:48:03.410549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.465 [2024-07-14 05:48:03.410564] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.465 [2024-07-14 05:48:03.410814] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.465 [2024-07-14 05:48:03.411035] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.465 [2024-07-14 05:48:03.411056] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.465 [2024-07-14 05:48:03.411070] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.465 [2024-07-14 05:48:03.414050] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.465 [2024-07-14 05:48:03.423304] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.465 [2024-07-14 05:48:03.423767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.465 [2024-07-14 05:48:03.423795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.465 [2024-07-14 05:48:03.423811] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.465 [2024-07-14 05:48:03.424050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.465 [2024-07-14 05:48:03.424275] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.465 [2024-07-14 05:48:03.424295] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.465 [2024-07-14 05:48:03.424309] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.465 [2024-07-14 05:48:03.427283] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.465 [2024-07-14 05:48:03.436581] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.465 [2024-07-14 05:48:03.436988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.465 [2024-07-14 05:48:03.437017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.465 [2024-07-14 05:48:03.437033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.465 [2024-07-14 05:48:03.437280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.465 [2024-07-14 05:48:03.437486] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.465 [2024-07-14 05:48:03.437505] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.465 [2024-07-14 05:48:03.437518] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.465 [2024-07-14 05:48:03.440572] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.465 [2024-07-14 05:48:03.449856] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.465 [2024-07-14 05:48:03.450332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.465 [2024-07-14 05:48:03.450360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.465 [2024-07-14 05:48:03.450376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.465 [2024-07-14 05:48:03.450618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.465 [2024-07-14 05:48:03.450824] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.465 [2024-07-14 05:48:03.450843] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.465 [2024-07-14 05:48:03.450886] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.465 [2024-07-14 05:48:03.454014] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.465 [2024-07-14 05:48:03.463184] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.465 [2024-07-14 05:48:03.463682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.465 [2024-07-14 05:48:03.463709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.465 [2024-07-14 05:48:03.463725] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.465 [2024-07-14 05:48:03.463950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.465 [2024-07-14 05:48:03.464185] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.465 [2024-07-14 05:48:03.464206] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.465 [2024-07-14 05:48:03.464234] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.465 [2024-07-14 05:48:03.467194] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.465 [2024-07-14 05:48:03.476492] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.465 [2024-07-14 05:48:03.476893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.465 [2024-07-14 05:48:03.476920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.465 [2024-07-14 05:48:03.476937] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.465 [2024-07-14 05:48:03.477167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.465 [2024-07-14 05:48:03.477389] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.465 [2024-07-14 05:48:03.477409] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.465 [2024-07-14 05:48:03.477422] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.465 [2024-07-14 05:48:03.480464] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.465 [2024-07-14 05:48:03.489790] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.465 [2024-07-14 05:48:03.490294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.465 [2024-07-14 05:48:03.490322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.465 [2024-07-14 05:48:03.490339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.465 [2024-07-14 05:48:03.490581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.465 [2024-07-14 05:48:03.490786] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.465 [2024-07-14 05:48:03.490806] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.465 [2024-07-14 05:48:03.490819] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.465 [2024-07-14 05:48:03.493823] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.465 [2024-07-14 05:48:03.503089] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.465 [2024-07-14 05:48:03.503524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.465 [2024-07-14 05:48:03.503556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.465 [2024-07-14 05:48:03.503572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.465 [2024-07-14 05:48:03.503815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.465 [2024-07-14 05:48:03.504052] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.465 [2024-07-14 05:48:03.504073] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.465 [2024-07-14 05:48:03.504087] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.465 [2024-07-14 05:48:03.507066] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.465 [2024-07-14 05:48:03.516365] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.465 [2024-07-14 05:48:03.516791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.465 [2024-07-14 05:48:03.516819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.465 [2024-07-14 05:48:03.516835] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.465 [2024-07-14 05:48:03.517075] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.465 [2024-07-14 05:48:03.517300] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.465 [2024-07-14 05:48:03.517320] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.465 [2024-07-14 05:48:03.517334] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.465 [2024-07-14 05:48:03.520297] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.465 [2024-07-14 05:48:03.529604] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.465 [2024-07-14 05:48:03.530041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.465 [2024-07-14 05:48:03.530069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.465 [2024-07-14 05:48:03.530086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.465 [2024-07-14 05:48:03.530329] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.465 [2024-07-14 05:48:03.530534] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.465 [2024-07-14 05:48:03.530554] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.465 [2024-07-14 05:48:03.530567] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.465 [2024-07-14 05:48:03.533651] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.465 [2024-07-14 05:48:03.542934] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.465 [2024-07-14 05:48:03.543407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.465 [2024-07-14 05:48:03.543434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.465 [2024-07-14 05:48:03.543450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.465 [2024-07-14 05:48:03.543694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.465 [2024-07-14 05:48:03.543931] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.466 [2024-07-14 05:48:03.543952] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.466 [2024-07-14 05:48:03.543966] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.466 [2024-07-14 05:48:03.546949] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.466 [2024-07-14 05:48:03.556130] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.466 [2024-07-14 05:48:03.556598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.466 [2024-07-14 05:48:03.556626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.466 [2024-07-14 05:48:03.556642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.466 [2024-07-14 05:48:03.556894] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.466 [2024-07-14 05:48:03.557122] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.466 [2024-07-14 05:48:03.557143] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.466 [2024-07-14 05:48:03.557156] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.466 [2024-07-14 05:48:03.560226] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.723 [2024-07-14 05:48:03.569902] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.723 [2024-07-14 05:48:03.570353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.723 [2024-07-14 05:48:03.570382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.723 [2024-07-14 05:48:03.570398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.723 [2024-07-14 05:48:03.570641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.724 [2024-07-14 05:48:03.570863] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.724 [2024-07-14 05:48:03.570892] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.724 [2024-07-14 05:48:03.570906] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.724 [2024-07-14 05:48:03.574330] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.724 [2024-07-14 05:48:03.583216] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.724 [2024-07-14 05:48:03.583619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.724 [2024-07-14 05:48:03.583646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.724 [2024-07-14 05:48:03.583662] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.724 [2024-07-14 05:48:03.583917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.724 [2024-07-14 05:48:03.584136] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.724 [2024-07-14 05:48:03.584157] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.724 [2024-07-14 05:48:03.584186] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.724 [2024-07-14 05:48:03.587171] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.724 [2024-07-14 05:48:03.596523] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.724 [2024-07-14 05:48:03.596984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.724 [2024-07-14 05:48:03.597026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.724 [2024-07-14 05:48:03.597043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.724 [2024-07-14 05:48:03.597284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.724 [2024-07-14 05:48:03.597489] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.724 [2024-07-14 05:48:03.597509] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.724 [2024-07-14 05:48:03.597522] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.724 [2024-07-14 05:48:03.600525] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.724 [2024-07-14 05:48:03.609786] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.724 [2024-07-14 05:48:03.610258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.724 [2024-07-14 05:48:03.610286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.724 [2024-07-14 05:48:03.610302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.724 [2024-07-14 05:48:03.610545] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.724 [2024-07-14 05:48:03.610767] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.724 [2024-07-14 05:48:03.610788] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.724 [2024-07-14 05:48:03.610802] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.724 [2024-07-14 05:48:03.613855] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.724 [2024-07-14 05:48:03.623127] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.724 [2024-07-14 05:48:03.623535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.724 [2024-07-14 05:48:03.623563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.724 [2024-07-14 05:48:03.623579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.724 [2024-07-14 05:48:03.623821] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.724 [2024-07-14 05:48:03.624077] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.724 [2024-07-14 05:48:03.624100] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.724 [2024-07-14 05:48:03.624114] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.724 [2024-07-14 05:48:03.627111] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.724 [2024-07-14 05:48:03.636411] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.724 [2024-07-14 05:48:03.636881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.724 [2024-07-14 05:48:03.636909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.724 [2024-07-14 05:48:03.636930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.724 [2024-07-14 05:48:03.637173] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.724 [2024-07-14 05:48:03.637378] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.724 [2024-07-14 05:48:03.637398] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.724 [2024-07-14 05:48:03.637411] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.724 [2024-07-14 05:48:03.640455] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.724 [2024-07-14 05:48:03.649613] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.724 [2024-07-14 05:48:03.650021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.724 [2024-07-14 05:48:03.650049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.724 [2024-07-14 05:48:03.650065] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.724 [2024-07-14 05:48:03.650296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.724 [2024-07-14 05:48:03.650518] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.724 [2024-07-14 05:48:03.650538] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.724 [2024-07-14 05:48:03.650551] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.724 [2024-07-14 05:48:03.653562] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.724 [2024-07-14 05:48:03.662815] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.724 [2024-07-14 05:48:03.663286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.724 [2024-07-14 05:48:03.663314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.724 [2024-07-14 05:48:03.663330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.724 [2024-07-14 05:48:03.663571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.724 [2024-07-14 05:48:03.663777] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.724 [2024-07-14 05:48:03.663797] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.724 [2024-07-14 05:48:03.663810] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.724 [2024-07-14 05:48:03.666796] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.724 [2024-07-14 05:48:03.676125] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.724 [2024-07-14 05:48:03.676595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.724 [2024-07-14 05:48:03.676623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.724 [2024-07-14 05:48:03.676639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.724 [2024-07-14 05:48:03.676892] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.724 [2024-07-14 05:48:03.677120] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.724 [2024-07-14 05:48:03.677144] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.724 [2024-07-14 05:48:03.677158] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.724 [2024-07-14 05:48:03.680137] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.724 [2024-07-14 05:48:03.689300] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.724 [2024-07-14 05:48:03.689752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.724 [2024-07-14 05:48:03.689779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.724 [2024-07-14 05:48:03.689795] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.724 [2024-07-14 05:48:03.690039] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.724 [2024-07-14 05:48:03.690265] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.724 [2024-07-14 05:48:03.690285] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.724 [2024-07-14 05:48:03.690299] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.724 [2024-07-14 05:48:03.693260] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.724 [2024-07-14 05:48:03.702517] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.724 [2024-07-14 05:48:03.702948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.724 [2024-07-14 05:48:03.702976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.724 [2024-07-14 05:48:03.702992] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.724 [2024-07-14 05:48:03.703236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.724 [2024-07-14 05:48:03.703441] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.724 [2024-07-14 05:48:03.703461] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.724 [2024-07-14 05:48:03.703474] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.724 [2024-07-14 05:48:03.706475] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.724 [2024-07-14 05:48:03.715811] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.724 [2024-07-14 05:48:03.716238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.724 [2024-07-14 05:48:03.716266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.724 [2024-07-14 05:48:03.716282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.725 [2024-07-14 05:48:03.716527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.725 [2024-07-14 05:48:03.716732] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.725 [2024-07-14 05:48:03.716752] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.725 [2024-07-14 05:48:03.716765] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.725 [2024-07-14 05:48:03.719773] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.725 [2024-07-14 05:48:03.729097] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.725 [2024-07-14 05:48:03.729493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.725 [2024-07-14 05:48:03.729521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.725 [2024-07-14 05:48:03.729537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.725 [2024-07-14 05:48:03.729780] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.725 [2024-07-14 05:48:03.729994] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.725 [2024-07-14 05:48:03.730014] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.725 [2024-07-14 05:48:03.730027] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.725 [2024-07-14 05:48:03.733032] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.725 [2024-07-14 05:48:03.742464] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.725 [2024-07-14 05:48:03.742893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.725 [2024-07-14 05:48:03.742934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.725 [2024-07-14 05:48:03.742950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.725 [2024-07-14 05:48:03.743180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.725 [2024-07-14 05:48:03.743386] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.725 [2024-07-14 05:48:03.743406] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.725 [2024-07-14 05:48:03.743419] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.725 [2024-07-14 05:48:03.746383] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.725 [2024-07-14 05:48:03.756090] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.725 [2024-07-14 05:48:03.756614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.725 [2024-07-14 05:48:03.756641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.725 [2024-07-14 05:48:03.756657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.725 [2024-07-14 05:48:03.756897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.725 [2024-07-14 05:48:03.757119] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.725 [2024-07-14 05:48:03.757139] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.725 [2024-07-14 05:48:03.757153] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.725 [2024-07-14 05:48:03.760174] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.725 [2024-07-14 05:48:03.769381] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.725 [2024-07-14 05:48:03.769783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.725 [2024-07-14 05:48:03.769810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.725 [2024-07-14 05:48:03.769826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.725 [2024-07-14 05:48:03.770069] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.725 [2024-07-14 05:48:03.770294] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.725 [2024-07-14 05:48:03.770314] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.725 [2024-07-14 05:48:03.770327] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.725 [2024-07-14 05:48:03.773295] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.725 [2024-07-14 05:48:03.782965] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.725 [2024-07-14 05:48:03.783420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.725 [2024-07-14 05:48:03.783448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.725 [2024-07-14 05:48:03.783464] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.725 [2024-07-14 05:48:03.783719] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.725 [2024-07-14 05:48:03.783949] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.725 [2024-07-14 05:48:03.783970] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.725 [2024-07-14 05:48:03.783984] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.725 [2024-07-14 05:48:03.787124] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.725 [2024-07-14 05:48:03.796489] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.725 [2024-07-14 05:48:03.796922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.725 [2024-07-14 05:48:03.796951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.725 [2024-07-14 05:48:03.796967] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.725 [2024-07-14 05:48:03.797209] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.725 [2024-07-14 05:48:03.797408] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.725 [2024-07-14 05:48:03.797427] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.725 [2024-07-14 05:48:03.797440] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.725 [2024-07-14 05:48:03.800548] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.725 [2024-07-14 05:48:03.809859] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.725 [2024-07-14 05:48:03.810259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.725 [2024-07-14 05:48:03.810286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.725 [2024-07-14 05:48:03.810302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.725 [2024-07-14 05:48:03.810548] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.725 [2024-07-14 05:48:03.810764] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.725 [2024-07-14 05:48:03.810783] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.725 [2024-07-14 05:48:03.810803] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.725 [2024-07-14 05:48:03.814018] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.725 [2024-07-14 05:48:03.823147] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.725 [2024-07-14 05:48:03.823527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.725 [2024-07-14 05:48:03.823553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.725 [2024-07-14 05:48:03.823568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.725 [2024-07-14 05:48:03.823803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.725 [2024-07-14 05:48:03.824011] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.725 [2024-07-14 05:48:03.824030] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.725 [2024-07-14 05:48:03.824043] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.725 [2024-07-14 05:48:03.827521] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.983 [2024-07-14 05:48:03.836662] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.983 [2024-07-14 05:48:03.837090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.983 [2024-07-14 05:48:03.837119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.983 [2024-07-14 05:48:03.837135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.983 [2024-07-14 05:48:03.837390] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.983 [2024-07-14 05:48:03.837590] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.983 [2024-07-14 05:48:03.837609] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.983 [2024-07-14 05:48:03.837622] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.983 [2024-07-14 05:48:03.840609] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.983 [2024-07-14 05:48:03.849975] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.983 [2024-07-14 05:48:03.850420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.983 [2024-07-14 05:48:03.850446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.983 [2024-07-14 05:48:03.850462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.983 [2024-07-14 05:48:03.850696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.983 [2024-07-14 05:48:03.850905] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.984 [2024-07-14 05:48:03.850925] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.984 [2024-07-14 05:48:03.850937] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.984 [2024-07-14 05:48:03.853894] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.984 [2024-07-14 05:48:03.863242] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.984 [2024-07-14 05:48:03.863701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.984 [2024-07-14 05:48:03.863741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.984 [2024-07-14 05:48:03.863758] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.984 [2024-07-14 05:48:03.864023] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.984 [2024-07-14 05:48:03.864223] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.984 [2024-07-14 05:48:03.864242] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.984 [2024-07-14 05:48:03.864255] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.984 [2024-07-14 05:48:03.867231] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.984 [2024-07-14 05:48:03.876506] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.984 [2024-07-14 05:48:03.876955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.984 [2024-07-14 05:48:03.876982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.984 [2024-07-14 05:48:03.876998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.984 [2024-07-14 05:48:03.877233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.984 [2024-07-14 05:48:03.877432] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.984 [2024-07-14 05:48:03.877451] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.984 [2024-07-14 05:48:03.877464] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.984 [2024-07-14 05:48:03.880440] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.984 [2024-07-14 05:48:03.889740] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.984 [2024-07-14 05:48:03.890131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.984 [2024-07-14 05:48:03.890157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.984 [2024-07-14 05:48:03.890172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.984 [2024-07-14 05:48:03.890408] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.984 [2024-07-14 05:48:03.890608] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.984 [2024-07-14 05:48:03.890627] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.984 [2024-07-14 05:48:03.890640] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.984 [2024-07-14 05:48:03.893666] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.984 [2024-07-14 05:48:03.902932] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.984 [2024-07-14 05:48:03.903421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.984 [2024-07-14 05:48:03.903449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.984 [2024-07-14 05:48:03.903464] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.984 [2024-07-14 05:48:03.903716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.984 [2024-07-14 05:48:03.903948] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.984 [2024-07-14 05:48:03.903968] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.984 [2024-07-14 05:48:03.903982] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.984 [2024-07-14 05:48:03.906937] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.984 [2024-07-14 05:48:03.916192] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.984 [2024-07-14 05:48:03.916560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.984 [2024-07-14 05:48:03.916586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.984 [2024-07-14 05:48:03.916601] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.984 [2024-07-14 05:48:03.916802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.984 [2024-07-14 05:48:03.917045] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.984 [2024-07-14 05:48:03.917066] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.984 [2024-07-14 05:48:03.917079] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.984 [2024-07-14 05:48:03.920034] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.984 [2024-07-14 05:48:03.929420] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.984 [2024-07-14 05:48:03.929822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.984 [2024-07-14 05:48:03.929848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.984 [2024-07-14 05:48:03.929864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.984 [2024-07-14 05:48:03.930082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.984 [2024-07-14 05:48:03.930315] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.984 [2024-07-14 05:48:03.930335] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.984 [2024-07-14 05:48:03.930348] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.984 [2024-07-14 05:48:03.933325] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.984 [2024-07-14 05:48:03.942594] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.984 [2024-07-14 05:48:03.942990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.984 [2024-07-14 05:48:03.943017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.984 [2024-07-14 05:48:03.943033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.984 [2024-07-14 05:48:03.943266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.984 [2024-07-14 05:48:03.943465] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.984 [2024-07-14 05:48:03.943484] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.984 [2024-07-14 05:48:03.943497] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.984 [2024-07-14 05:48:03.946498] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.984 [2024-07-14 05:48:03.955913] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.984 [2024-07-14 05:48:03.956342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.984 [2024-07-14 05:48:03.956370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.984 [2024-07-14 05:48:03.956386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.984 [2024-07-14 05:48:03.956639] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.984 [2024-07-14 05:48:03.956837] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.984 [2024-07-14 05:48:03.956856] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.984 [2024-07-14 05:48:03.956891] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.984 [2024-07-14 05:48:03.959847] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.984 [2024-07-14 05:48:03.969065] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.984 [2024-07-14 05:48:03.969468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.984 [2024-07-14 05:48:03.969495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.984 [2024-07-14 05:48:03.969510] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.984 [2024-07-14 05:48:03.969732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.984 [2024-07-14 05:48:03.969974] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.984 [2024-07-14 05:48:03.969995] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.984 [2024-07-14 05:48:03.970008] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.984 [2024-07-14 05:48:03.972963] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.984 [2024-07-14 05:48:03.982351] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.984 [2024-07-14 05:48:03.982773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.984 [2024-07-14 05:48:03.982800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.984 [2024-07-14 05:48:03.982815] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.984 [2024-07-14 05:48:03.983067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.984 [2024-07-14 05:48:03.983285] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.984 [2024-07-14 05:48:03.983304] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.984 [2024-07-14 05:48:03.983317] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.984 [2024-07-14 05:48:03.986293] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.984 [2024-07-14 05:48:03.995511] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.984 [2024-07-14 05:48:03.995889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.984 [2024-07-14 05:48:03.995916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.984 [2024-07-14 05:48:03.995935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.984 [2024-07-14 05:48:03.996151] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.985 [2024-07-14 05:48:03.996350] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.985 [2024-07-14 05:48:03.996369] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.985 [2024-07-14 05:48:03.996382] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.985 [2024-07-14 05:48:03.999358] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.985 [2024-07-14 05:48:04.008841] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.985 [2024-07-14 05:48:04.009339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.985 [2024-07-14 05:48:04.009367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.985 [2024-07-14 05:48:04.009384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.985 [2024-07-14 05:48:04.009637] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.985 [2024-07-14 05:48:04.009837] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.985 [2024-07-14 05:48:04.009856] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.985 [2024-07-14 05:48:04.009877] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.985 [2024-07-14 05:48:04.012890] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.985 [2024-07-14 05:48:04.022151] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.985 [2024-07-14 05:48:04.022638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.985 [2024-07-14 05:48:04.022666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.985 [2024-07-14 05:48:04.022682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.985 [2024-07-14 05:48:04.022915] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.985 [2024-07-14 05:48:04.023121] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.985 [2024-07-14 05:48:04.023141] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.985 [2024-07-14 05:48:04.023168] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.985 [2024-07-14 05:48:04.026107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.985 [2024-07-14 05:48:04.035444] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.985 [2024-07-14 05:48:04.035839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.985 [2024-07-14 05:48:04.035874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.985 [2024-07-14 05:48:04.035901] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.985 [2024-07-14 05:48:04.036141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.985 [2024-07-14 05:48:04.036340] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.985 [2024-07-14 05:48:04.036364] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.985 [2024-07-14 05:48:04.036377] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.985 [2024-07-14 05:48:04.039353] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.985 [2024-07-14 05:48:04.048681] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.985 [2024-07-14 05:48:04.049123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.985 [2024-07-14 05:48:04.049150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.985 [2024-07-14 05:48:04.049166] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.985 [2024-07-14 05:48:04.049419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.985 [2024-07-14 05:48:04.049618] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.985 [2024-07-14 05:48:04.049637] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.985 [2024-07-14 05:48:04.049650] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.985 [2024-07-14 05:48:04.052667] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.985 [2024-07-14 05:48:04.062097] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.985 [2024-07-14 05:48:04.062493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.985 [2024-07-14 05:48:04.062519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.985 [2024-07-14 05:48:04.062535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.985 [2024-07-14 05:48:04.062777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.985 [2024-07-14 05:48:04.063011] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.985 [2024-07-14 05:48:04.063033] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.985 [2024-07-14 05:48:04.063047] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.985 [2024-07-14 05:48:04.066286] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.985 [2024-07-14 05:48:04.075692] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.985 [2024-07-14 05:48:04.076117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.985 [2024-07-14 05:48:04.076145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:56.985 [2024-07-14 05:48:04.076160] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:56.985 [2024-07-14 05:48:04.076416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:56.985 [2024-07-14 05:48:04.076615] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.985 [2024-07-14 05:48:04.076634] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.985 [2024-07-14 05:48:04.076647] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.985 [2024-07-14 05:48:04.079624] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.244 [2024-07-14 05:48:04.089604] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.244 [2024-07-14 05:48:04.090047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.244 [2024-07-14 05:48:04.090075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.244 [2024-07-14 05:48:04.090092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.244 [2024-07-14 05:48:04.090347] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.244 [2024-07-14 05:48:04.090547] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.244 [2024-07-14 05:48:04.090566] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.244 [2024-07-14 05:48:04.090579] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.244 [2024-07-14 05:48:04.093966] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.244 [2024-07-14 05:48:04.103008] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.244 [2024-07-14 05:48:04.103500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.244 [2024-07-14 05:48:04.103528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.244 [2024-07-14 05:48:04.103544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.244 [2024-07-14 05:48:04.103797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.244 [2024-07-14 05:48:04.104045] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.244 [2024-07-14 05:48:04.104066] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.244 [2024-07-14 05:48:04.104080] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.244 [2024-07-14 05:48:04.107121] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.244 [2024-07-14 05:48:04.116264] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.244 [2024-07-14 05:48:04.116701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.244 [2024-07-14 05:48:04.116728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.244 [2024-07-14 05:48:04.116744] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.244 [2024-07-14 05:48:04.116981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.244 [2024-07-14 05:48:04.117208] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.244 [2024-07-14 05:48:04.117228] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.244 [2024-07-14 05:48:04.117241] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.244 [2024-07-14 05:48:04.120334] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.244 [2024-07-14 05:48:04.129713] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.244 [2024-07-14 05:48:04.130168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.244 [2024-07-14 05:48:04.130196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.244 [2024-07-14 05:48:04.130212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.244 [2024-07-14 05:48:04.130470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.244 [2024-07-14 05:48:04.130669] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.244 [2024-07-14 05:48:04.130688] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.244 [2024-07-14 05:48:04.130701] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.244 [2024-07-14 05:48:04.133726] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.244 [2024-07-14 05:48:04.143070] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.244 [2024-07-14 05:48:04.143498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.244 [2024-07-14 05:48:04.143525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.244 [2024-07-14 05:48:04.143541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.244 [2024-07-14 05:48:04.143793] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.244 [2024-07-14 05:48:04.144025] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.244 [2024-07-14 05:48:04.144046] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.244 [2024-07-14 05:48:04.144060] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.244 [2024-07-14 05:48:04.147036] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.244 [2024-07-14 05:48:04.156337] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.244 [2024-07-14 05:48:04.156775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.244 [2024-07-14 05:48:04.156803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.244 [2024-07-14 05:48:04.156820] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.244 [2024-07-14 05:48:04.157081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.244 [2024-07-14 05:48:04.157282] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.244 [2024-07-14 05:48:04.157301] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.244 [2024-07-14 05:48:04.157314] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.244 [2024-07-14 05:48:04.160288] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.244 [2024-07-14 05:48:04.169545] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.244 [2024-07-14 05:48:04.170034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.244 [2024-07-14 05:48:04.170062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.244 [2024-07-14 05:48:04.170078] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.244 [2024-07-14 05:48:04.170331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.244 [2024-07-14 05:48:04.170530] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.244 [2024-07-14 05:48:04.170549] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.244 [2024-07-14 05:48:04.170567] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.244 [2024-07-14 05:48:04.173544] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.244 [2024-07-14 05:48:04.182928] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.245 [2024-07-14 05:48:04.183342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.245 [2024-07-14 05:48:04.183369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.245 [2024-07-14 05:48:04.183385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.245 [2024-07-14 05:48:04.183629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.245 [2024-07-14 05:48:04.183834] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.245 [2024-07-14 05:48:04.183877] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.245 [2024-07-14 05:48:04.183893] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.245 [2024-07-14 05:48:04.186996] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.245 [2024-07-14 05:48:04.196293] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.245 [2024-07-14 05:48:04.196727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.245 [2024-07-14 05:48:04.196754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.245 [2024-07-14 05:48:04.196770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.245 [2024-07-14 05:48:04.197009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.245 [2024-07-14 05:48:04.197236] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.245 [2024-07-14 05:48:04.197256] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.245 [2024-07-14 05:48:04.197268] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.245 [2024-07-14 05:48:04.200289] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.245 [2024-07-14 05:48:04.209560] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.245 [2024-07-14 05:48:04.209986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.245 [2024-07-14 05:48:04.210014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.245 [2024-07-14 05:48:04.210030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.245 [2024-07-14 05:48:04.210284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.245 [2024-07-14 05:48:04.210484] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.245 [2024-07-14 05:48:04.210503] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.245 [2024-07-14 05:48:04.210515] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.245 [2024-07-14 05:48:04.213530] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.245 [2024-07-14 05:48:04.223037] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.245 [2024-07-14 05:48:04.223439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.245 [2024-07-14 05:48:04.223465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.245 [2024-07-14 05:48:04.223481] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.245 [2024-07-14 05:48:04.223716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.245 [2024-07-14 05:48:04.223921] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.245 [2024-07-14 05:48:04.223941] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.245 [2024-07-14 05:48:04.223954] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.245 [2024-07-14 05:48:04.226972] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.245 [2024-07-14 05:48:04.236292] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.245 [2024-07-14 05:48:04.236670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.245 [2024-07-14 05:48:04.236697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.245 [2024-07-14 05:48:04.236713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.245 [2024-07-14 05:48:04.236957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.245 [2024-07-14 05:48:04.237157] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.245 [2024-07-14 05:48:04.237176] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.245 [2024-07-14 05:48:04.237189] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.245 [2024-07-14 05:48:04.240200] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.245 [2024-07-14 05:48:04.249532] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.245 [2024-07-14 05:48:04.249902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.245 [2024-07-14 05:48:04.249929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.245 [2024-07-14 05:48:04.249945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.245 [2024-07-14 05:48:04.250153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.245 [2024-07-14 05:48:04.250384] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.245 [2024-07-14 05:48:04.250404] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.245 [2024-07-14 05:48:04.250417] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.245 [2024-07-14 05:48:04.253348] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.245 [2024-07-14 05:48:04.263118] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.245 [2024-07-14 05:48:04.263592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.245 [2024-07-14 05:48:04.263623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.245 [2024-07-14 05:48:04.263641] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.245 [2024-07-14 05:48:04.263889] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.245 [2024-07-14 05:48:04.264138] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.245 [2024-07-14 05:48:04.264161] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.245 [2024-07-14 05:48:04.264177] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.245 [2024-07-14 05:48:04.267767] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.245 [2024-07-14 05:48:04.277085] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.245 [2024-07-14 05:48:04.277549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.245 [2024-07-14 05:48:04.277575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.245 [2024-07-14 05:48:04.277590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.245 [2024-07-14 05:48:04.277838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.245 [2024-07-14 05:48:04.278076] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.245 [2024-07-14 05:48:04.278097] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.245 [2024-07-14 05:48:04.278110] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.245 [2024-07-14 05:48:04.281706] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.245 [2024-07-14 05:48:04.291029] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.245 [2024-07-14 05:48:04.291467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.245 [2024-07-14 05:48:04.291498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.245 [2024-07-14 05:48:04.291515] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.245 [2024-07-14 05:48:04.291754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.245 [2024-07-14 05:48:04.292008] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.245 [2024-07-14 05:48:04.292032] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.245 [2024-07-14 05:48:04.292048] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.245 [2024-07-14 05:48:04.295639] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.245 [2024-07-14 05:48:04.304963] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.245 [2024-07-14 05:48:04.305421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.245 [2024-07-14 05:48:04.305451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.245 [2024-07-14 05:48:04.305470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.245 [2024-07-14 05:48:04.305709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.245 [2024-07-14 05:48:04.305962] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.245 [2024-07-14 05:48:04.305986] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.245 [2024-07-14 05:48:04.306002] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.245 [2024-07-14 05:48:04.309595] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.245 [2024-07-14 05:48:04.318952] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.245 [2024-07-14 05:48:04.319397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.245 [2024-07-14 05:48:04.319428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.245 [2024-07-14 05:48:04.319445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.245 [2024-07-14 05:48:04.319684] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.245 [2024-07-14 05:48:04.319939] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.245 [2024-07-14 05:48:04.319963] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.245 [2024-07-14 05:48:04.319979] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.245 [2024-07-14 05:48:04.323566] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.246 [2024-07-14 05:48:04.332899] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.246 [2024-07-14 05:48:04.333452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.246 [2024-07-14 05:48:04.333482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.246 [2024-07-14 05:48:04.333500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.246 [2024-07-14 05:48:04.333738] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.246 [2024-07-14 05:48:04.333991] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.246 [2024-07-14 05:48:04.334015] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.246 [2024-07-14 05:48:04.334031] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.246 [2024-07-14 05:48:04.337620] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.246 [2024-07-14 05:48:04.347080] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.246 [2024-07-14 05:48:04.347655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.246 [2024-07-14 05:48:04.347705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.246 [2024-07-14 05:48:04.347724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.246 [2024-07-14 05:48:04.347973] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.246 [2024-07-14 05:48:04.348249] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.246 [2024-07-14 05:48:04.348277] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.246 [2024-07-14 05:48:04.348293] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.504 [2024-07-14 05:48:04.351943] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.504 [2024-07-14 05:48:04.360975] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.504 [2024-07-14 05:48:04.361568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.504 [2024-07-14 05:48:04.361629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.504 [2024-07-14 05:48:04.361652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.504 [2024-07-14 05:48:04.361900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.504 [2024-07-14 05:48:04.362144] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.504 [2024-07-14 05:48:04.362167] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.504 [2024-07-14 05:48:04.362184] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.504 [2024-07-14 05:48:04.365766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.504 [2024-07-14 05:48:04.374881] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.504 [2024-07-14 05:48:04.375475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.504 [2024-07-14 05:48:04.375531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.504 [2024-07-14 05:48:04.375549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.504 [2024-07-14 05:48:04.375788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.504 [2024-07-14 05:48:04.376040] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.504 [2024-07-14 05:48:04.376064] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.504 [2024-07-14 05:48:04.376079] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.504 [2024-07-14 05:48:04.379670] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.504 [2024-07-14 05:48:04.388798] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.504 [2024-07-14 05:48:04.389261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.504 [2024-07-14 05:48:04.389293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.504 [2024-07-14 05:48:04.389311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.504 [2024-07-14 05:48:04.389549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.504 [2024-07-14 05:48:04.389792] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.504 [2024-07-14 05:48:04.389815] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.504 [2024-07-14 05:48:04.389831] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.504 [2024-07-14 05:48:04.393425] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.504 [2024-07-14 05:48:04.402752] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.504 [2024-07-14 05:48:04.403221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.504 [2024-07-14 05:48:04.403252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.504 [2024-07-14 05:48:04.403269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.505 [2024-07-14 05:48:04.403507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.505 [2024-07-14 05:48:04.403750] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.505 [2024-07-14 05:48:04.403779] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.505 [2024-07-14 05:48:04.403795] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.505 [2024-07-14 05:48:04.407390] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.505 [2024-07-14 05:48:04.416713] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.505 [2024-07-14 05:48:04.417163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.505 [2024-07-14 05:48:04.417194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.505 [2024-07-14 05:48:04.417212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.505 [2024-07-14 05:48:04.417451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.505 [2024-07-14 05:48:04.417693] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.505 [2024-07-14 05:48:04.417717] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.505 [2024-07-14 05:48:04.417733] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.505 [2024-07-14 05:48:04.421327] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.505 [2024-07-14 05:48:04.430651] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.505 [2024-07-14 05:48:04.431126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.505 [2024-07-14 05:48:04.431157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.505 [2024-07-14 05:48:04.431175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.505 [2024-07-14 05:48:04.431413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.505 [2024-07-14 05:48:04.431656] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.505 [2024-07-14 05:48:04.431680] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.505 [2024-07-14 05:48:04.431695] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.505 [2024-07-14 05:48:04.435289] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.505 [2024-07-14 05:48:04.444604] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.505 [2024-07-14 05:48:04.445049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.505 [2024-07-14 05:48:04.445080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.505 [2024-07-14 05:48:04.445098] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.505 [2024-07-14 05:48:04.445336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.505 [2024-07-14 05:48:04.445580] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.505 [2024-07-14 05:48:04.445603] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.505 [2024-07-14 05:48:04.445618] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.505 [2024-07-14 05:48:04.449215] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.505 [2024-07-14 05:48:04.458539] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.505 [2024-07-14 05:48:04.459006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.505 [2024-07-14 05:48:04.459039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.505 [2024-07-14 05:48:04.459056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.505 [2024-07-14 05:48:04.459295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.505 [2024-07-14 05:48:04.459537] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.505 [2024-07-14 05:48:04.459561] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.505 [2024-07-14 05:48:04.459577] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.505 [2024-07-14 05:48:04.463174] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.505 [2024-07-14 05:48:04.472489] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.505 [2024-07-14 05:48:04.472942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.505 [2024-07-14 05:48:04.472972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.505 [2024-07-14 05:48:04.472990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.505 [2024-07-14 05:48:04.473228] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.505 [2024-07-14 05:48:04.473471] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.505 [2024-07-14 05:48:04.473494] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.505 [2024-07-14 05:48:04.473510] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.505 [2024-07-14 05:48:04.477106] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.505 [2024-07-14 05:48:04.486423] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.505 [2024-07-14 05:48:04.486875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.505 [2024-07-14 05:48:04.486905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.505 [2024-07-14 05:48:04.486923] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.505 [2024-07-14 05:48:04.487162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.505 [2024-07-14 05:48:04.487404] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.505 [2024-07-14 05:48:04.487427] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.505 [2024-07-14 05:48:04.487442] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.505 [2024-07-14 05:48:04.491040] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.505 [2024-07-14 05:48:04.500360] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.505 [2024-07-14 05:48:04.500822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.505 [2024-07-14 05:48:04.500852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.505 [2024-07-14 05:48:04.500878] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.505 [2024-07-14 05:48:04.501128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.505 [2024-07-14 05:48:04.501372] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.505 [2024-07-14 05:48:04.501395] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.505 [2024-07-14 05:48:04.501410] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.505 [2024-07-14 05:48:04.505002] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.505 [2024-07-14 05:48:04.514347] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.505 [2024-07-14 05:48:04.514801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.505 [2024-07-14 05:48:04.514831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.505 [2024-07-14 05:48:04.514849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.505 [2024-07-14 05:48:04.515096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.505 [2024-07-14 05:48:04.515341] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.505 [2024-07-14 05:48:04.515364] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.505 [2024-07-14 05:48:04.515379] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.505 [2024-07-14 05:48:04.518971] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.505 [2024-07-14 05:48:04.528292] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.505 [2024-07-14 05:48:04.528765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.505 [2024-07-14 05:48:04.528795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.505 [2024-07-14 05:48:04.528813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.505 [2024-07-14 05:48:04.529062] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.505 [2024-07-14 05:48:04.529305] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.505 [2024-07-14 05:48:04.529328] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.505 [2024-07-14 05:48:04.529344] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.505 [2024-07-14 05:48:04.532939] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.505 [2024-07-14 05:48:04.542256] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.505 [2024-07-14 05:48:04.542697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.505 [2024-07-14 05:48:04.542727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.505 [2024-07-14 05:48:04.542744] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.505 [2024-07-14 05:48:04.542995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.505 [2024-07-14 05:48:04.543238] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.505 [2024-07-14 05:48:04.543261] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.505 [2024-07-14 05:48:04.543283] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.505 [2024-07-14 05:48:04.546876] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.505 [2024-07-14 05:48:04.556191] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.505 [2024-07-14 05:48:04.556642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.505 [2024-07-14 05:48:04.556673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.505 [2024-07-14 05:48:04.556690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.506 [2024-07-14 05:48:04.556941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.506 [2024-07-14 05:48:04.557185] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.506 [2024-07-14 05:48:04.557208] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.506 [2024-07-14 05:48:04.557224] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.506 [2024-07-14 05:48:04.560811] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.506 [2024-07-14 05:48:04.570136] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.506 [2024-07-14 05:48:04.570572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.506 [2024-07-14 05:48:04.570602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.506 [2024-07-14 05:48:04.570619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.506 [2024-07-14 05:48:04.570858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.506 [2024-07-14 05:48:04.571111] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.506 [2024-07-14 05:48:04.571134] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.506 [2024-07-14 05:48:04.571150] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.506 [2024-07-14 05:48:04.574739] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.506 [2024-07-14 05:48:04.584065] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.506 [2024-07-14 05:48:04.584544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.506 [2024-07-14 05:48:04.584574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.506 [2024-07-14 05:48:04.584592] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.506 [2024-07-14 05:48:04.584830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.506 [2024-07-14 05:48:04.585080] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.506 [2024-07-14 05:48:04.585104] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.506 [2024-07-14 05:48:04.585120] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.506 [2024-07-14 05:48:04.588703] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.506 [2024-07-14 05:48:04.598032] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.506 [2024-07-14 05:48:04.598597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.506 [2024-07-14 05:48:04.598656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.506 [2024-07-14 05:48:04.598673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.506 [2024-07-14 05:48:04.598922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.506 [2024-07-14 05:48:04.599166] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.506 [2024-07-14 05:48:04.599190] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.506 [2024-07-14 05:48:04.599205] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.506 [2024-07-14 05:48:04.602789] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.764 [2024-07-14 05:48:04.612169] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.764 [2024-07-14 05:48:04.612836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.764 [2024-07-14 05:48:04.612884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.764 [2024-07-14 05:48:04.612905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.764 [2024-07-14 05:48:04.613145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.764 [2024-07-14 05:48:04.613406] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.764 [2024-07-14 05:48:04.613440] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.764 [2024-07-14 05:48:04.613470] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.764 [2024-07-14 05:48:04.617110] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.765 [2024-07-14 05:48:04.626222] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.765 [2024-07-14 05:48:04.626689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.765 [2024-07-14 05:48:04.626719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.765 [2024-07-14 05:48:04.626737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.765 [2024-07-14 05:48:04.626989] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.765 [2024-07-14 05:48:04.627232] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.765 [2024-07-14 05:48:04.627256] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.765 [2024-07-14 05:48:04.627271] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.765 [2024-07-14 05:48:04.630858] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.765 [2024-07-14 05:48:04.640186] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.765 [2024-07-14 05:48:04.640621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.765 [2024-07-14 05:48:04.640652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.765 [2024-07-14 05:48:04.640670] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.765 [2024-07-14 05:48:04.640923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.765 [2024-07-14 05:48:04.641174] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.765 [2024-07-14 05:48:04.641197] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.765 [2024-07-14 05:48:04.641213] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.765 [2024-07-14 05:48:04.644801] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.765 [2024-07-14 05:48:04.654141] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.765 [2024-07-14 05:48:04.654580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.765 [2024-07-14 05:48:04.654611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.765 [2024-07-14 05:48:04.654628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.765 [2024-07-14 05:48:04.654878] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.765 [2024-07-14 05:48:04.655122] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.765 [2024-07-14 05:48:04.655145] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.765 [2024-07-14 05:48:04.655161] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.765 [2024-07-14 05:48:04.658747] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.765 [2024-07-14 05:48:04.668072] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.765 [2024-07-14 05:48:04.668537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.765 [2024-07-14 05:48:04.668567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.765 [2024-07-14 05:48:04.668585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.765 [2024-07-14 05:48:04.668824] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.765 [2024-07-14 05:48:04.669077] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.765 [2024-07-14 05:48:04.669100] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.765 [2024-07-14 05:48:04.669116] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.765 [2024-07-14 05:48:04.672702] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.765 [2024-07-14 05:48:04.682035] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.765 [2024-07-14 05:48:04.682455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.765 [2024-07-14 05:48:04.682487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.765 [2024-07-14 05:48:04.682505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.765 [2024-07-14 05:48:04.682744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.765 [2024-07-14 05:48:04.683000] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.765 [2024-07-14 05:48:04.683024] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.765 [2024-07-14 05:48:04.683039] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.765 [2024-07-14 05:48:04.686634] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.765 [2024-07-14 05:48:04.695969] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.765 [2024-07-14 05:48:04.696573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.765 [2024-07-14 05:48:04.696625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.765 [2024-07-14 05:48:04.696643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.765 [2024-07-14 05:48:04.696893] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.765 [2024-07-14 05:48:04.697137] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.765 [2024-07-14 05:48:04.697160] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.765 [2024-07-14 05:48:04.697176] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.765 [2024-07-14 05:48:04.700910] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.765 [2024-07-14 05:48:04.710022] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.765 [2024-07-14 05:48:04.710500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.765 [2024-07-14 05:48:04.710531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.765 [2024-07-14 05:48:04.710549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.765 [2024-07-14 05:48:04.710788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.765 [2024-07-14 05:48:04.711043] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.765 [2024-07-14 05:48:04.711067] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.765 [2024-07-14 05:48:04.711083] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.765 [2024-07-14 05:48:04.714669] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.765 [2024-07-14 05:48:04.723994] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.765 [2024-07-14 05:48:04.724426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.765 [2024-07-14 05:48:04.724457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.765 [2024-07-14 05:48:04.724474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.765 [2024-07-14 05:48:04.724713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.765 [2024-07-14 05:48:04.724966] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.765 [2024-07-14 05:48:04.724990] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.765 [2024-07-14 05:48:04.725006] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.765 [2024-07-14 05:48:04.728598] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.765 [2024-07-14 05:48:04.737927] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.765 [2024-07-14 05:48:04.738367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.765 [2024-07-14 05:48:04.738398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.765 [2024-07-14 05:48:04.738422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.765 [2024-07-14 05:48:04.738661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.765 [2024-07-14 05:48:04.738915] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.765 [2024-07-14 05:48:04.738939] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.765 [2024-07-14 05:48:04.738954] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.765 [2024-07-14 05:48:04.742542] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.765 [2024-07-14 05:48:04.751862] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.765 [2024-07-14 05:48:04.752341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.765 [2024-07-14 05:48:04.752371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.765 [2024-07-14 05:48:04.752389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.765 [2024-07-14 05:48:04.752628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.765 [2024-07-14 05:48:04.752882] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.765 [2024-07-14 05:48:04.752906] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.765 [2024-07-14 05:48:04.752921] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.765 [2024-07-14 05:48:04.756507] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.765 [2024-07-14 05:48:04.765842] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.765 [2024-07-14 05:48:04.766287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.765 [2024-07-14 05:48:04.766317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.765 [2024-07-14 05:48:04.766335] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.765 [2024-07-14 05:48:04.766573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.765 [2024-07-14 05:48:04.766816] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.765 [2024-07-14 05:48:04.766839] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.765 [2024-07-14 05:48:04.766855] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.766 [2024-07-14 05:48:04.770450] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.766 [2024-07-14 05:48:04.779779] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.766 [2024-07-14 05:48:04.780230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.766 [2024-07-14 05:48:04.780260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.766 [2024-07-14 05:48:04.780278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.766 [2024-07-14 05:48:04.780517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.766 [2024-07-14 05:48:04.780759] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.766 [2024-07-14 05:48:04.780788] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.766 [2024-07-14 05:48:04.780804] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.766 [2024-07-14 05:48:04.784403] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.766 [2024-07-14 05:48:04.793723] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.766 [2024-07-14 05:48:04.794195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.766 [2024-07-14 05:48:04.794225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.766 [2024-07-14 05:48:04.794243] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.766 [2024-07-14 05:48:04.794482] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.766 [2024-07-14 05:48:04.794724] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.766 [2024-07-14 05:48:04.794747] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.766 [2024-07-14 05:48:04.794763] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.766 [2024-07-14 05:48:04.798366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.766 [2024-07-14 05:48:04.807692] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.766 [2024-07-14 05:48:04.808161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.766 [2024-07-14 05:48:04.808192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.766 [2024-07-14 05:48:04.808210] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.766 [2024-07-14 05:48:04.808449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.766 [2024-07-14 05:48:04.808692] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.766 [2024-07-14 05:48:04.808715] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.766 [2024-07-14 05:48:04.808730] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.766 [2024-07-14 05:48:04.812329] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.766 [2024-07-14 05:48:04.821656] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.766 [2024-07-14 05:48:04.822072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.766 [2024-07-14 05:48:04.822102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.766 [2024-07-14 05:48:04.822120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.766 [2024-07-14 05:48:04.822359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.766 [2024-07-14 05:48:04.822602] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.766 [2024-07-14 05:48:04.822625] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.766 [2024-07-14 05:48:04.822641] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.766 [2024-07-14 05:48:04.826237] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.766 [2024-07-14 05:48:04.835570] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.766 [2024-07-14 05:48:04.836034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.766 [2024-07-14 05:48:04.836066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.766 [2024-07-14 05:48:04.836084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.766 [2024-07-14 05:48:04.836322] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.766 [2024-07-14 05:48:04.836565] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.766 [2024-07-14 05:48:04.836588] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.766 [2024-07-14 05:48:04.836604] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.766 [2024-07-14 05:48:04.840197] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.766 [2024-07-14 05:48:04.849520] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.766 [2024-07-14 05:48:04.849985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.766 [2024-07-14 05:48:04.850016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.766 [2024-07-14 05:48:04.850034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.766 [2024-07-14 05:48:04.850272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.766 [2024-07-14 05:48:04.850515] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.766 [2024-07-14 05:48:04.850538] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.766 [2024-07-14 05:48:04.850554] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.766 [2024-07-14 05:48:04.854147] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.766 [2024-07-14 05:48:04.863478] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.766 [2024-07-14 05:48:04.863952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.766 [2024-07-14 05:48:04.863983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:57.766 [2024-07-14 05:48:04.864001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:57.766 [2024-07-14 05:48:04.864240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:57.766 [2024-07-14 05:48:04.864483] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.766 [2024-07-14 05:48:04.864506] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.766 [2024-07-14 05:48:04.864521] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.766 [2024-07-14 05:48:04.868280] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.025 [2024-07-14 05:48:04.877636] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.025 [2024-07-14 05:48:04.878125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.025 [2024-07-14 05:48:04.878157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.026 [2024-07-14 05:48:04.878176] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.026 [2024-07-14 05:48:04.878422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.026 [2024-07-14 05:48:04.878665] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.026 [2024-07-14 05:48:04.878689] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.026 [2024-07-14 05:48:04.878705] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.026 [2024-07-14 05:48:04.882302] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.026 [2024-07-14 05:48:04.891647] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.026 [2024-07-14 05:48:04.892102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.026 [2024-07-14 05:48:04.892133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.026 [2024-07-14 05:48:04.892151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.026 [2024-07-14 05:48:04.892392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.026 [2024-07-14 05:48:04.892635] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.026 [2024-07-14 05:48:04.892659] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.026 [2024-07-14 05:48:04.892674] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.026 [2024-07-14 05:48:04.896282] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.026 [2024-07-14 05:48:04.905624] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.026 [2024-07-14 05:48:04.906068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.026 [2024-07-14 05:48:04.906099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.026 [2024-07-14 05:48:04.906117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.026 [2024-07-14 05:48:04.906356] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.026 [2024-07-14 05:48:04.906600] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.026 [2024-07-14 05:48:04.906622] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.026 [2024-07-14 05:48:04.906638] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.026 [2024-07-14 05:48:04.910238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.026 [2024-07-14 05:48:04.919596] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.026 [2024-07-14 05:48:04.920081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.026 [2024-07-14 05:48:04.920111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.026 [2024-07-14 05:48:04.920129] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.026 [2024-07-14 05:48:04.920368] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.026 [2024-07-14 05:48:04.920611] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.026 [2024-07-14 05:48:04.920634] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.026 [2024-07-14 05:48:04.920655] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.026 [2024-07-14 05:48:04.924257] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.026 [2024-07-14 05:48:04.933598] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.026 [2024-07-14 05:48:04.934081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.026 [2024-07-14 05:48:04.934112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.026 [2024-07-14 05:48:04.934129] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.026 [2024-07-14 05:48:04.934368] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.026 [2024-07-14 05:48:04.934611] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.026 [2024-07-14 05:48:04.934634] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.026 [2024-07-14 05:48:04.934650] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.026 [2024-07-14 05:48:04.938250] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.026 [2024-07-14 05:48:04.947580] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.026 [2024-07-14 05:48:04.948014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.026 [2024-07-14 05:48:04.948045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.026 [2024-07-14 05:48:04.948063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.026 [2024-07-14 05:48:04.948302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.026 [2024-07-14 05:48:04.948545] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.026 [2024-07-14 05:48:04.948568] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.026 [2024-07-14 05:48:04.948583] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.026 [2024-07-14 05:48:04.952182] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.026 [2024-07-14 05:48:04.961515] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.026 [2024-07-14 05:48:04.961954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.026 [2024-07-14 05:48:04.961984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.026 [2024-07-14 05:48:04.962001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.026 [2024-07-14 05:48:04.962240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.026 [2024-07-14 05:48:04.962484] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.026 [2024-07-14 05:48:04.962507] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.026 [2024-07-14 05:48:04.962522] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.026 [2024-07-14 05:48:04.966125] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.026 [2024-07-14 05:48:04.975454] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.026 [2024-07-14 05:48:04.975928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.026 [2024-07-14 05:48:04.975959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.026 [2024-07-14 05:48:04.975977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.026 [2024-07-14 05:48:04.976216] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.026 [2024-07-14 05:48:04.976459] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.026 [2024-07-14 05:48:04.976482] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.026 [2024-07-14 05:48:04.976498] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.026 [2024-07-14 05:48:04.980100] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.027 [2024-07-14 05:48:04.989432] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.027 [2024-07-14 05:48:04.989895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.027 [2024-07-14 05:48:04.989926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.027 [2024-07-14 05:48:04.989943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.027 [2024-07-14 05:48:04.990183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.027 [2024-07-14 05:48:04.990425] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.027 [2024-07-14 05:48:04.990448] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.027 [2024-07-14 05:48:04.990464] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.027 [2024-07-14 05:48:04.994065] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.027 [2024-07-14 05:48:05.003406] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.027 [2024-07-14 05:48:05.003840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.027 [2024-07-14 05:48:05.003878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.027 [2024-07-14 05:48:05.003897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.027 [2024-07-14 05:48:05.004137] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.027 [2024-07-14 05:48:05.004380] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.027 [2024-07-14 05:48:05.004403] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.027 [2024-07-14 05:48:05.004419] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.027 [2024-07-14 05:48:05.008018] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.027 [2024-07-14 05:48:05.017354] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.027 [2024-07-14 05:48:05.017828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.027 [2024-07-14 05:48:05.017859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.027 [2024-07-14 05:48:05.017886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.027 [2024-07-14 05:48:05.018126] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.027 [2024-07-14 05:48:05.018374] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.027 [2024-07-14 05:48:05.018398] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.027 [2024-07-14 05:48:05.018414] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.027 [2024-07-14 05:48:05.022012] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.027 [2024-07-14 05:48:05.031343] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.027 [2024-07-14 05:48:05.031887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.027 [2024-07-14 05:48:05.031919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.027 [2024-07-14 05:48:05.031936] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.027 [2024-07-14 05:48:05.032175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.027 [2024-07-14 05:48:05.032418] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.027 [2024-07-14 05:48:05.032442] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.027 [2024-07-14 05:48:05.032457] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.027 [2024-07-14 05:48:05.036049] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.027 [2024-07-14 05:48:05.045387] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.027 [2024-07-14 05:48:05.045847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.027 [2024-07-14 05:48:05.045883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.027 [2024-07-14 05:48:05.045902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.027 [2024-07-14 05:48:05.046141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.027 [2024-07-14 05:48:05.046384] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.027 [2024-07-14 05:48:05.046407] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.027 [2024-07-14 05:48:05.046423] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.027 [2024-07-14 05:48:05.050021] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.027 [2024-07-14 05:48:05.059349] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.027 [2024-07-14 05:48:05.059961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.027 [2024-07-14 05:48:05.059993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.027 [2024-07-14 05:48:05.060011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.027 [2024-07-14 05:48:05.060249] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.027 [2024-07-14 05:48:05.060492] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.027 [2024-07-14 05:48:05.060515] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.027 [2024-07-14 05:48:05.060530] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.027 [2024-07-14 05:48:05.064135] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.027 [2024-07-14 05:48:05.073262] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.027 [2024-07-14 05:48:05.073859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.027 [2024-07-14 05:48:05.073923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.027 [2024-07-14 05:48:05.073941] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.027 [2024-07-14 05:48:05.074180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.027 [2024-07-14 05:48:05.074423] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.027 [2024-07-14 05:48:05.074446] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.027 [2024-07-14 05:48:05.074461] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.027 [2024-07-14 05:48:05.078061] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.027 [2024-07-14 05:48:05.087185] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.027 [2024-07-14 05:48:05.087785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.027 [2024-07-14 05:48:05.087838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.027 [2024-07-14 05:48:05.087856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.028 [2024-07-14 05:48:05.088102] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.028 [2024-07-14 05:48:05.088346] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.028 [2024-07-14 05:48:05.088369] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.028 [2024-07-14 05:48:05.088385] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.028 [2024-07-14 05:48:05.091974] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.028 [2024-07-14 05:48:05.101297] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.028 [2024-07-14 05:48:05.101769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.028 [2024-07-14 05:48:05.101799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.028 [2024-07-14 05:48:05.101816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.028 [2024-07-14 05:48:05.102065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.028 [2024-07-14 05:48:05.102309] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.028 [2024-07-14 05:48:05.102332] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.028 [2024-07-14 05:48:05.102348] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.028 [2024-07-14 05:48:05.105944] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.028 [2024-07-14 05:48:05.115269] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.028 [2024-07-14 05:48:05.115730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.028 [2024-07-14 05:48:05.115760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.028 [2024-07-14 05:48:05.115783] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.028 [2024-07-14 05:48:05.116035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.028 [2024-07-14 05:48:05.116279] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.028 [2024-07-14 05:48:05.116303] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.028 [2024-07-14 05:48:05.116318] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.028 [2024-07-14 05:48:05.119912] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.028 [2024-07-14 05:48:05.129393] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.028 [2024-07-14 05:48:05.129979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.028 [2024-07-14 05:48:05.130013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.028 [2024-07-14 05:48:05.130031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.028 [2024-07-14 05:48:05.130272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.288 [2024-07-14 05:48:05.130515] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.288 [2024-07-14 05:48:05.130538] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.288 [2024-07-14 05:48:05.130554] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.288 [2024-07-14 05:48:05.134174] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.288 [2024-07-14 05:48:05.143413] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.288 [2024-07-14 05:48:05.143980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.288 [2024-07-14 05:48:05.144012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.288 [2024-07-14 05:48:05.144030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.288 [2024-07-14 05:48:05.144269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.288 [2024-07-14 05:48:05.144512] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.288 [2024-07-14 05:48:05.144535] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.288 [2024-07-14 05:48:05.144551] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.288 [2024-07-14 05:48:05.148168] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.288 [2024-07-14 05:48:05.157286] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.288 [2024-07-14 05:48:05.157725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.288 [2024-07-14 05:48:05.157755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.288 [2024-07-14 05:48:05.157773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.288 [2024-07-14 05:48:05.158024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.288 [2024-07-14 05:48:05.158268] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.288 [2024-07-14 05:48:05.158297] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.288 [2024-07-14 05:48:05.158314] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.288 [2024-07-14 05:48:05.161910] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.288 [2024-07-14 05:48:05.171234] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.288 [2024-07-14 05:48:05.171694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.288 [2024-07-14 05:48:05.171724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.288 [2024-07-14 05:48:05.171742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.288 [2024-07-14 05:48:05.171993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.288 [2024-07-14 05:48:05.172237] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.288 [2024-07-14 05:48:05.172260] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.288 [2024-07-14 05:48:05.172275] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.288 [2024-07-14 05:48:05.175863] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.288 [2024-07-14 05:48:05.185192] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.288 [2024-07-14 05:48:05.185647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.288 [2024-07-14 05:48:05.185679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.288 [2024-07-14 05:48:05.185697] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.288 [2024-07-14 05:48:05.185948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.288 [2024-07-14 05:48:05.186192] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.288 [2024-07-14 05:48:05.186215] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.288 [2024-07-14 05:48:05.186231] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.288 [2024-07-14 05:48:05.189816] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.288 [2024-07-14 05:48:05.199150] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.288 [2024-07-14 05:48:05.199611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.288 [2024-07-14 05:48:05.199641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.288 [2024-07-14 05:48:05.199659] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.288 [2024-07-14 05:48:05.199909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.288 [2024-07-14 05:48:05.200153] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.288 [2024-07-14 05:48:05.200176] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.288 [2024-07-14 05:48:05.200191] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.288 [2024-07-14 05:48:05.203777] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.288 [2024-07-14 05:48:05.213114] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.288 [2024-07-14 05:48:05.213579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.288 [2024-07-14 05:48:05.213609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.288 [2024-07-14 05:48:05.213627] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.288 [2024-07-14 05:48:05.213876] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.288 [2024-07-14 05:48:05.214119] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.288 [2024-07-14 05:48:05.214143] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.288 [2024-07-14 05:48:05.214158] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.288 [2024-07-14 05:48:05.217744] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.288 [2024-07-14 05:48:05.227074] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.288 [2024-07-14 05:48:05.227533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.288 [2024-07-14 05:48:05.227563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.288 [2024-07-14 05:48:05.227580] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.288 [2024-07-14 05:48:05.227819] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.288 [2024-07-14 05:48:05.228071] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.288 [2024-07-14 05:48:05.228095] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.288 [2024-07-14 05:48:05.228111] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.288 [2024-07-14 05:48:05.231709] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.288 [2024-07-14 05:48:05.241047] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.288 [2024-07-14 05:48:05.241487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.288 [2024-07-14 05:48:05.241517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.288 [2024-07-14 05:48:05.241535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.288 [2024-07-14 05:48:05.241775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.288 [2024-07-14 05:48:05.242028] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.289 [2024-07-14 05:48:05.242052] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.289 [2024-07-14 05:48:05.242068] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.289 [2024-07-14 05:48:05.245658] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.289 [2024-07-14 05:48:05.254991] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.289 [2024-07-14 05:48:05.255435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.289 [2024-07-14 05:48:05.255465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.289 [2024-07-14 05:48:05.255483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.289 [2024-07-14 05:48:05.255729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.289 [2024-07-14 05:48:05.255983] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.289 [2024-07-14 05:48:05.256007] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.289 [2024-07-14 05:48:05.256023] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.289 [2024-07-14 05:48:05.259615] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.289 [2024-07-14 05:48:05.268945] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.289 [2024-07-14 05:48:05.269412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.289 [2024-07-14 05:48:05.269443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.289 [2024-07-14 05:48:05.269460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.289 [2024-07-14 05:48:05.269700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.289 [2024-07-14 05:48:05.269955] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.289 [2024-07-14 05:48:05.269979] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.289 [2024-07-14 05:48:05.269994] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.289 [2024-07-14 05:48:05.273585] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.289 [2024-07-14 05:48:05.282954] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.289 [2024-07-14 05:48:05.283421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.289 [2024-07-14 05:48:05.283452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.289 [2024-07-14 05:48:05.283470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.289 [2024-07-14 05:48:05.283710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.289 [2024-07-14 05:48:05.283963] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.289 [2024-07-14 05:48:05.283999] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.289 [2024-07-14 05:48:05.284015] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.289 [2024-07-14 05:48:05.287603] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.289 [2024-07-14 05:48:05.296947] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.289 [2024-07-14 05:48:05.297589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.289 [2024-07-14 05:48:05.297644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.289 [2024-07-14 05:48:05.297661] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.289 [2024-07-14 05:48:05.297912] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.289 [2024-07-14 05:48:05.298155] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.289 [2024-07-14 05:48:05.298178] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.289 [2024-07-14 05:48:05.298201] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.289 [2024-07-14 05:48:05.301791] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.289 [2024-07-14 05:48:05.310916] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.289 [2024-07-14 05:48:05.311523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.289 [2024-07-14 05:48:05.311577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.289 [2024-07-14 05:48:05.311594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.289 [2024-07-14 05:48:05.311833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.289 [2024-07-14 05:48:05.312085] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.289 [2024-07-14 05:48:05.312109] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.289 [2024-07-14 05:48:05.312124] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.289 [2024-07-14 05:48:05.315732] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.289 [2024-07-14 05:48:05.324859] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.289 [2024-07-14 05:48:05.325288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.289 [2024-07-14 05:48:05.325320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.289 [2024-07-14 05:48:05.325338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.289 [2024-07-14 05:48:05.325578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.289 [2024-07-14 05:48:05.325821] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.289 [2024-07-14 05:48:05.325845] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.289 [2024-07-14 05:48:05.325861] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.289 [2024-07-14 05:48:05.329456] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.289 [2024-07-14 05:48:05.338786] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.289 [2024-07-14 05:48:05.339243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.289 [2024-07-14 05:48:05.339274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.289 [2024-07-14 05:48:05.339292] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.289 [2024-07-14 05:48:05.339531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.289 [2024-07-14 05:48:05.339774] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.289 [2024-07-14 05:48:05.339797] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.289 [2024-07-14 05:48:05.339813] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.289 [2024-07-14 05:48:05.343435] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.289 [2024-07-14 05:48:05.352763] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.289 [2024-07-14 05:48:05.353241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.289 [2024-07-14 05:48:05.353272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.289 [2024-07-14 05:48:05.353290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.289 [2024-07-14 05:48:05.353528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.289 [2024-07-14 05:48:05.353771] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.289 [2024-07-14 05:48:05.353795] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.289 [2024-07-14 05:48:05.353810] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.289 [2024-07-14 05:48:05.357408] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.289 [2024-07-14 05:48:05.366731] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.289 [2024-07-14 05:48:05.367201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.289 [2024-07-14 05:48:05.367231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.289 [2024-07-14 05:48:05.367249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.289 [2024-07-14 05:48:05.367488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.289 [2024-07-14 05:48:05.367730] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.289 [2024-07-14 05:48:05.367753] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.289 [2024-07-14 05:48:05.367769] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.289 [2024-07-14 05:48:05.371366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.289 [2024-07-14 05:48:05.380692] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.289 [2024-07-14 05:48:05.381185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.289 [2024-07-14 05:48:05.381216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.289 [2024-07-14 05:48:05.381233] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.289 [2024-07-14 05:48:05.381472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.289 [2024-07-14 05:48:05.381716] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.289 [2024-07-14 05:48:05.381739] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.289 [2024-07-14 05:48:05.381754] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.289 [2024-07-14 05:48:05.385352] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.550 [2024-07-14 05:48:05.394730] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.550 [2024-07-14 05:48:05.395209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.550 [2024-07-14 05:48:05.395241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.550 [2024-07-14 05:48:05.395259] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.550 [2024-07-14 05:48:05.395497] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.550 [2024-07-14 05:48:05.395746] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.550 [2024-07-14 05:48:05.395770] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.550 [2024-07-14 05:48:05.395786] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.550 [2024-07-14 05:48:05.399497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.550 [2024-07-14 05:48:05.408658] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.550 [2024-07-14 05:48:05.409138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.550 [2024-07-14 05:48:05.409170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.550 [2024-07-14 05:48:05.409188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.550 [2024-07-14 05:48:05.409427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.550 [2024-07-14 05:48:05.409670] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.550 [2024-07-14 05:48:05.409693] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.550 [2024-07-14 05:48:05.409709] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.550 [2024-07-14 05:48:05.413307] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.550 [2024-07-14 05:48:05.422635] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.550 [2024-07-14 05:48:05.423080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.550 [2024-07-14 05:48:05.423111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.550 [2024-07-14 05:48:05.423129] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.550 [2024-07-14 05:48:05.423369] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.550 [2024-07-14 05:48:05.423612] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.550 [2024-07-14 05:48:05.423635] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.550 [2024-07-14 05:48:05.423650] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.550 [2024-07-14 05:48:05.427245] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.550 [2024-07-14 05:48:05.436563] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.550 [2024-07-14 05:48:05.437008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.550 [2024-07-14 05:48:05.437040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.550 [2024-07-14 05:48:05.437057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.550 [2024-07-14 05:48:05.437298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.550 [2024-07-14 05:48:05.437542] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.550 [2024-07-14 05:48:05.437565] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.550 [2024-07-14 05:48:05.437581] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.550 [2024-07-14 05:48:05.441182] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.550 [2024-07-14 05:48:05.450501] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.550 [2024-07-14 05:48:05.450939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.550 [2024-07-14 05:48:05.450971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.550 [2024-07-14 05:48:05.450988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.550 [2024-07-14 05:48:05.451227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.550 [2024-07-14 05:48:05.451470] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.550 [2024-07-14 05:48:05.451493] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.550 [2024-07-14 05:48:05.451509] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.550 [2024-07-14 05:48:05.455103] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.550 [2024-07-14 05:48:05.464424] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.550 [2024-07-14 05:48:05.464876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.550 [2024-07-14 05:48:05.464906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.550 [2024-07-14 05:48:05.464924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.550 [2024-07-14 05:48:05.465162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.550 [2024-07-14 05:48:05.465405] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.550 [2024-07-14 05:48:05.465428] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.550 [2024-07-14 05:48:05.465444] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.550 [2024-07-14 05:48:05.469042] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.550 [2024-07-14 05:48:05.478360] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.550 [2024-07-14 05:48:05.478799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.550 [2024-07-14 05:48:05.478830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.550 [2024-07-14 05:48:05.478848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.550 [2024-07-14 05:48:05.479097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.550 [2024-07-14 05:48:05.479341] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.550 [2024-07-14 05:48:05.479365] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.550 [2024-07-14 05:48:05.479380] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.550 [2024-07-14 05:48:05.482972] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.550 [2024-07-14 05:48:05.492285] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.550 [2024-07-14 05:48:05.492742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.550 [2024-07-14 05:48:05.492773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.550 [2024-07-14 05:48:05.492796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.550 [2024-07-14 05:48:05.493049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.550 [2024-07-14 05:48:05.493293] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.550 [2024-07-14 05:48:05.493316] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.550 [2024-07-14 05:48:05.493332] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.550 [2024-07-14 05:48:05.496929] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.550 [2024-07-14 05:48:05.506243] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.550 [2024-07-14 05:48:05.506714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.550 [2024-07-14 05:48:05.506744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.550 [2024-07-14 05:48:05.506762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.550 [2024-07-14 05:48:05.507011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.550 [2024-07-14 05:48:05.507255] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.550 [2024-07-14 05:48:05.507278] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.550 [2024-07-14 05:48:05.507294] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.550 [2024-07-14 05:48:05.510883] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.550 [2024-07-14 05:48:05.520197] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.550 [2024-07-14 05:48:05.520634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.550 [2024-07-14 05:48:05.520664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.550 [2024-07-14 05:48:05.520681] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.550 [2024-07-14 05:48:05.520931] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.550 [2024-07-14 05:48:05.521174] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.550 [2024-07-14 05:48:05.521197] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.550 [2024-07-14 05:48:05.521213] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.550 [2024-07-14 05:48:05.524796] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.550 [2024-07-14 05:48:05.534117] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.550 [2024-07-14 05:48:05.534553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.550 [2024-07-14 05:48:05.534584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.550 [2024-07-14 05:48:05.534601] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.550 [2024-07-14 05:48:05.534841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.550 [2024-07-14 05:48:05.535093] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.550 [2024-07-14 05:48:05.535122] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.550 [2024-07-14 05:48:05.535138] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.550 [2024-07-14 05:48:05.538726] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.550 [2024-07-14 05:48:05.548049] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.550 [2024-07-14 05:48:05.548513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.550 [2024-07-14 05:48:05.548544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.550 [2024-07-14 05:48:05.548562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.550 [2024-07-14 05:48:05.548800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.550 [2024-07-14 05:48:05.549053] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.550 [2024-07-14 05:48:05.549077] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.550 [2024-07-14 05:48:05.549092] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.550 [2024-07-14 05:48:05.552679] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.550 [2024-07-14 05:48:05.562005] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.550 [2024-07-14 05:48:05.562465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.550 [2024-07-14 05:48:05.562495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.550 [2024-07-14 05:48:05.562513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.550 [2024-07-14 05:48:05.562752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.550 [2024-07-14 05:48:05.563006] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.550 [2024-07-14 05:48:05.563030] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.550 [2024-07-14 05:48:05.563045] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.550 [2024-07-14 05:48:05.566631] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.550 [2024-07-14 05:48:05.575957] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.550 [2024-07-14 05:48:05.576419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.550 [2024-07-14 05:48:05.576450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.550 [2024-07-14 05:48:05.576468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.550 [2024-07-14 05:48:05.576706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.550 [2024-07-14 05:48:05.576960] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.550 [2024-07-14 05:48:05.576984] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.550 [2024-07-14 05:48:05.577000] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.550 [2024-07-14 05:48:05.580585] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.550 [2024-07-14 05:48:05.589921] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.550 [2024-07-14 05:48:05.590380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.550 [2024-07-14 05:48:05.590410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.550 [2024-07-14 05:48:05.590428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.550 [2024-07-14 05:48:05.590666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.550 [2024-07-14 05:48:05.590918] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.550 [2024-07-14 05:48:05.590941] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.551 [2024-07-14 05:48:05.590957] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.551 [2024-07-14 05:48:05.594544] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.551 [2024-07-14 05:48:05.603875] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.551 [2024-07-14 05:48:05.604345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.551 [2024-07-14 05:48:05.604375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.551 [2024-07-14 05:48:05.604393] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.551 [2024-07-14 05:48:05.604633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.551 [2024-07-14 05:48:05.604886] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.551 [2024-07-14 05:48:05.604910] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.551 [2024-07-14 05:48:05.604925] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.551 [2024-07-14 05:48:05.608509] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.551 [2024-07-14 05:48:05.617824] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.551 [2024-07-14 05:48:05.618288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.551 [2024-07-14 05:48:05.618318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.551 [2024-07-14 05:48:05.618336] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.551 [2024-07-14 05:48:05.618574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.551 [2024-07-14 05:48:05.618817] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.551 [2024-07-14 05:48:05.618840] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.551 [2024-07-14 05:48:05.618855] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.551 [2024-07-14 05:48:05.622452] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.551 [2024-07-14 05:48:05.631770] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.551 [2024-07-14 05:48:05.632219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.551 [2024-07-14 05:48:05.632250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.551 [2024-07-14 05:48:05.632267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.551 [2024-07-14 05:48:05.632511] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.551 [2024-07-14 05:48:05.632755] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.551 [2024-07-14 05:48:05.632778] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.551 [2024-07-14 05:48:05.632793] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.551 [2024-07-14 05:48:05.636388] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.551 [2024-07-14 05:48:05.645706] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.551 [2024-07-14 05:48:05.646214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.551 [2024-07-14 05:48:05.646264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.551 [2024-07-14 05:48:05.646282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.551 [2024-07-14 05:48:05.646520] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.551 [2024-07-14 05:48:05.646763] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.551 [2024-07-14 05:48:05.646787] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.551 [2024-07-14 05:48:05.646802] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.551 [2024-07-14 05:48:05.650464] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.810 [2024-07-14 05:48:05.659810] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.810 [2024-07-14 05:48:05.660304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.810 [2024-07-14 05:48:05.660338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.810 [2024-07-14 05:48:05.660356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.810 [2024-07-14 05:48:05.660597] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.811 [2024-07-14 05:48:05.660839] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.811 [2024-07-14 05:48:05.660863] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.811 [2024-07-14 05:48:05.660891] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.811 [2024-07-14 05:48:05.664486] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.811 [2024-07-14 05:48:05.673812] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.811 [2024-07-14 05:48:05.674284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.811 [2024-07-14 05:48:05.674316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.811 [2024-07-14 05:48:05.674334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.811 [2024-07-14 05:48:05.674573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.811 [2024-07-14 05:48:05.674816] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.811 [2024-07-14 05:48:05.674839] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.811 [2024-07-14 05:48:05.674860] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.811 [2024-07-14 05:48:05.678459] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.811 [2024-07-14 05:48:05.687782] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.811 [2024-07-14 05:48:05.688230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.811 [2024-07-14 05:48:05.688262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.811 [2024-07-14 05:48:05.688280] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.811 [2024-07-14 05:48:05.688520] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.811 [2024-07-14 05:48:05.688764] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.811 [2024-07-14 05:48:05.688786] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.811 [2024-07-14 05:48:05.688802] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.811 [2024-07-14 05:48:05.692402] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.811 [2024-07-14 05:48:05.701735] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.811 [2024-07-14 05:48:05.702215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.811 [2024-07-14 05:48:05.702246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.811 [2024-07-14 05:48:05.702265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.811 [2024-07-14 05:48:05.702503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.811 [2024-07-14 05:48:05.702746] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.811 [2024-07-14 05:48:05.702770] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.811 [2024-07-14 05:48:05.702785] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.811 [2024-07-14 05:48:05.706381] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.811 [2024-07-14 05:48:05.715701] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.811 [2024-07-14 05:48:05.716172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.811 [2024-07-14 05:48:05.716203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.811 [2024-07-14 05:48:05.716220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.811 [2024-07-14 05:48:05.716459] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.811 [2024-07-14 05:48:05.716702] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.811 [2024-07-14 05:48:05.716725] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.811 [2024-07-14 05:48:05.716741] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.811 [2024-07-14 05:48:05.720338] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.811 [2024-07-14 05:48:05.729657] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.811 [2024-07-14 05:48:05.730109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.811 [2024-07-14 05:48:05.730141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.811 [2024-07-14 05:48:05.730159] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.811 [2024-07-14 05:48:05.730399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.811 [2024-07-14 05:48:05.730642] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.811 [2024-07-14 05:48:05.730665] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.811 [2024-07-14 05:48:05.730680] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.811 [2024-07-14 05:48:05.734276] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.811 [2024-07-14 05:48:05.743596] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.811 [2024-07-14 05:48:05.744041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.811 [2024-07-14 05:48:05.744072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.811 [2024-07-14 05:48:05.744090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.811 [2024-07-14 05:48:05.744330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.811 [2024-07-14 05:48:05.744572] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.811 [2024-07-14 05:48:05.744595] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.811 [2024-07-14 05:48:05.744611] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.811 [2024-07-14 05:48:05.748209] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.811 [2024-07-14 05:48:05.757527] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.811 [2024-07-14 05:48:05.757972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.811 [2024-07-14 05:48:05.758003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.811 [2024-07-14 05:48:05.758021] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.811 [2024-07-14 05:48:05.758260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.811 [2024-07-14 05:48:05.758503] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.811 [2024-07-14 05:48:05.758526] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.811 [2024-07-14 05:48:05.758541] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.811 [2024-07-14 05:48:05.762135] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.811 [2024-07-14 05:48:05.771450] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.811 [2024-07-14 05:48:05.771884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.811 [2024-07-14 05:48:05.771915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.811 [2024-07-14 05:48:05.771933] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.811 [2024-07-14 05:48:05.772172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.811 [2024-07-14 05:48:05.772420] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.811 [2024-07-14 05:48:05.772444] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.811 [2024-07-14 05:48:05.772459] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.811 [2024-07-14 05:48:05.776058] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.811 [2024-07-14 05:48:05.785382] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.811 [2024-07-14 05:48:05.785822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.811 [2024-07-14 05:48:05.785854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.811 [2024-07-14 05:48:05.785882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.811 [2024-07-14 05:48:05.786123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.811 [2024-07-14 05:48:05.786366] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.811 [2024-07-14 05:48:05.786389] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.811 [2024-07-14 05:48:05.786404] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.811 [2024-07-14 05:48:05.790000] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.811 [2024-07-14 05:48:05.799328] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.811 [2024-07-14 05:48:05.799789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.811 [2024-07-14 05:48:05.799819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.811 [2024-07-14 05:48:05.799837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.811 [2024-07-14 05:48:05.800086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.811 [2024-07-14 05:48:05.800330] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.811 [2024-07-14 05:48:05.800353] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.811 [2024-07-14 05:48:05.800368] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.811 [2024-07-14 05:48:05.803962] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.811 [2024-07-14 05:48:05.813299] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.811 [2024-07-14 05:48:05.813762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.811 [2024-07-14 05:48:05.813792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.811 [2024-07-14 05:48:05.813810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.811 [2024-07-14 05:48:05.814059] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.812 [2024-07-14 05:48:05.814304] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.812 [2024-07-14 05:48:05.814327] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.812 [2024-07-14 05:48:05.814343] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.812 [2024-07-14 05:48:05.817944] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.812 [2024-07-14 05:48:05.827265] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.812 [2024-07-14 05:48:05.827727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.812 [2024-07-14 05:48:05.827757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.812 [2024-07-14 05:48:05.827775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.812 [2024-07-14 05:48:05.828025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.812 [2024-07-14 05:48:05.828269] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.812 [2024-07-14 05:48:05.828292] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.812 [2024-07-14 05:48:05.828308] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.812 [2024-07-14 05:48:05.831904] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.812 [2024-07-14 05:48:05.841229] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.812 [2024-07-14 05:48:05.841700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.812 [2024-07-14 05:48:05.841730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.812 [2024-07-14 05:48:05.841747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.812 [2024-07-14 05:48:05.841996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.812 [2024-07-14 05:48:05.842240] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.812 [2024-07-14 05:48:05.842263] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.812 [2024-07-14 05:48:05.842278] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.812 [2024-07-14 05:48:05.845874] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.812 [2024-07-14 05:48:05.855196] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.812 [2024-07-14 05:48:05.855673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.812 [2024-07-14 05:48:05.855703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.812 [2024-07-14 05:48:05.855721] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.812 [2024-07-14 05:48:05.855972] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.812 [2024-07-14 05:48:05.856216] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.812 [2024-07-14 05:48:05.856239] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.812 [2024-07-14 05:48:05.856254] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.812 [2024-07-14 05:48:05.859843] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.812 [2024-07-14 05:48:05.869169] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.812 [2024-07-14 05:48:05.869629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.812 [2024-07-14 05:48:05.869659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.812 [2024-07-14 05:48:05.869682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.812 [2024-07-14 05:48:05.869932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.812 [2024-07-14 05:48:05.870176] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.812 [2024-07-14 05:48:05.870200] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.812 [2024-07-14 05:48:05.870215] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.812 [2024-07-14 05:48:05.873801] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.812 [2024-07-14 05:48:05.883128] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.812 [2024-07-14 05:48:05.883561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.812 [2024-07-14 05:48:05.883591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.812 [2024-07-14 05:48:05.883609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.812 [2024-07-14 05:48:05.883848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.812 [2024-07-14 05:48:05.884101] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.812 [2024-07-14 05:48:05.884125] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.812 [2024-07-14 05:48:05.884140] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.812 [2024-07-14 05:48:05.887727] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.812 [2024-07-14 05:48:05.897061] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.812 [2024-07-14 05:48:05.897506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.812 [2024-07-14 05:48:05.897536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.812 [2024-07-14 05:48:05.897554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.812 [2024-07-14 05:48:05.897793] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.812 [2024-07-14 05:48:05.898045] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.812 [2024-07-14 05:48:05.898069] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.812 [2024-07-14 05:48:05.898085] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.812 [2024-07-14 05:48:05.901672] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.812 [2024-07-14 05:48:05.911080] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.812 [2024-07-14 05:48:05.911576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.812 [2024-07-14 05:48:05.911609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:58.812 [2024-07-14 05:48:05.911628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:58.812 [2024-07-14 05:48:05.911898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:58.812 [2024-07-14 05:48:05.912144] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.812 [2024-07-14 05:48:05.912175] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.812 [2024-07-14 05:48:05.912191] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.072 [2024-07-14 05:48:05.916098] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.072 [2024-07-14 05:48:05.924980] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.072 [2024-07-14 05:48:05.925587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.072 [2024-07-14 05:48:05.925642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.072 [2024-07-14 05:48:05.925661] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.072 [2024-07-14 05:48:05.925913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.072 [2024-07-14 05:48:05.926157] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.072 [2024-07-14 05:48:05.926181] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.072 [2024-07-14 05:48:05.926196] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.072 [2024-07-14 05:48:05.929781] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.072 [2024-07-14 05:48:05.938897] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.072 [2024-07-14 05:48:05.939368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.072 [2024-07-14 05:48:05.939399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.072 [2024-07-14 05:48:05.939417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.072 [2024-07-14 05:48:05.939656] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.072 [2024-07-14 05:48:05.939911] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.073 [2024-07-14 05:48:05.939935] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.073 [2024-07-14 05:48:05.939950] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.073 [2024-07-14 05:48:05.943537] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.073 [2024-07-14 05:48:05.952855] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.073 [2024-07-14 05:48:05.953331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.073 [2024-07-14 05:48:05.953362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.073 [2024-07-14 05:48:05.953380] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.073 [2024-07-14 05:48:05.953619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.073 [2024-07-14 05:48:05.953862] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.073 [2024-07-14 05:48:05.953895] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.073 [2024-07-14 05:48:05.953911] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.073 [2024-07-14 05:48:05.957499] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.073 [2024-07-14 05:48:05.966833] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.073 [2024-07-14 05:48:05.967311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.073 [2024-07-14 05:48:05.967342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.073 [2024-07-14 05:48:05.967360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.073 [2024-07-14 05:48:05.967599] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.073 [2024-07-14 05:48:05.967842] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.073 [2024-07-14 05:48:05.967874] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.073 [2024-07-14 05:48:05.967892] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.073 [2024-07-14 05:48:05.971478] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.073 [2024-07-14 05:48:05.980789] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.073 [2024-07-14 05:48:05.981236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.073 [2024-07-14 05:48:05.981267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.073 [2024-07-14 05:48:05.981285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.073 [2024-07-14 05:48:05.981523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.073 [2024-07-14 05:48:05.981766] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.073 [2024-07-14 05:48:05.981789] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.073 [2024-07-14 05:48:05.981804] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.073 [2024-07-14 05:48:05.985402] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.073 [2024-07-14 05:48:05.994725] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.073 [2024-07-14 05:48:05.995180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.073 [2024-07-14 05:48:05.995210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.073 [2024-07-14 05:48:05.995228] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.073 [2024-07-14 05:48:05.995467] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.073 [2024-07-14 05:48:05.995711] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.073 [2024-07-14 05:48:05.995734] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.073 [2024-07-14 05:48:05.995749] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.073 [2024-07-14 05:48:05.999348] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.073 [2024-07-14 05:48:06.008669] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.073 [2024-07-14 05:48:06.009112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.073 [2024-07-14 05:48:06.009143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.073 [2024-07-14 05:48:06.009161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.073 [2024-07-14 05:48:06.009405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.073 [2024-07-14 05:48:06.009649] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.073 [2024-07-14 05:48:06.009672] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.073 [2024-07-14 05:48:06.009688] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.073 [2024-07-14 05:48:06.013285] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.073 [2024-07-14 05:48:06.022617] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.073 [2024-07-14 05:48:06.023063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.073 [2024-07-14 05:48:06.023094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.073 [2024-07-14 05:48:06.023112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.073 [2024-07-14 05:48:06.023352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.073 [2024-07-14 05:48:06.023595] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.073 [2024-07-14 05:48:06.023618] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.073 [2024-07-14 05:48:06.023634] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.073 [2024-07-14 05:48:06.027228] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.073 [2024-07-14 05:48:06.036558] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.073 [2024-07-14 05:48:06.036998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.073 [2024-07-14 05:48:06.037029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.073 [2024-07-14 05:48:06.037048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.073 [2024-07-14 05:48:06.037287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.073 [2024-07-14 05:48:06.037530] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.073 [2024-07-14 05:48:06.037553] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.073 [2024-07-14 05:48:06.037569] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.073 [2024-07-14 05:48:06.041165] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3388643 Killed "${NVMF_APP[@]}" "$@" 00:33:59.073 05:48:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:59.073 05:48:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:59.073 05:48:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:59.073 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:59.073 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:59.073 05:48:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3389721 00:33:59.073 05:48:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:59.073 05:48:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3389721 00:33:59.073 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3389721 ']' 00:33:59.073 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:59.073 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:59.073 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:59.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:59.073 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:59.073 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:59.073 [2024-07-14 05:48:06.050497] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.073 [2024-07-14 05:48:06.050915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.073 [2024-07-14 05:48:06.050946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.073 [2024-07-14 05:48:06.050965] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.073 [2024-07-14 05:48:06.051204] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.073 [2024-07-14 05:48:06.051448] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.073 [2024-07-14 05:48:06.051471] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.073 [2024-07-14 05:48:06.051487] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.073 [2024-07-14 05:48:06.055086] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.073 [2024-07-14 05:48:06.064445] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.073 [2024-07-14 05:48:06.064878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.073 [2024-07-14 05:48:06.064910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.073 [2024-07-14 05:48:06.064928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.073 [2024-07-14 05:48:06.065167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.073 [2024-07-14 05:48:06.065411] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.073 [2024-07-14 05:48:06.065434] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.073 [2024-07-14 05:48:06.065449] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.073 [2024-07-14 05:48:06.069045] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.073 [2024-07-14 05:48:06.078380] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.074 [2024-07-14 05:48:06.078825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.074 [2024-07-14 05:48:06.078856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.074 [2024-07-14 05:48:06.078891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.074 [2024-07-14 05:48:06.079131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.074 [2024-07-14 05:48:06.079375] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.074 [2024-07-14 05:48:06.079398] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.074 [2024-07-14 05:48:06.079429] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.074 [2024-07-14 05:48:06.083033] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.074 [2024-07-14 05:48:06.091966] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.074 [2024-07-14 05:48:06.092401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.074 [2024-07-14 05:48:06.092429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.074 [2024-07-14 05:48:06.092445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.074 [2024-07-14 05:48:06.092683] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.074 [2024-07-14 05:48:06.092912] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.074 [2024-07-14 05:48:06.092934] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.074 [2024-07-14 05:48:06.092949] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.074 [2024-07-14 05:48:06.096253] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.074 [2024-07-14 05:48:06.097425] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:59.074 [2024-07-14 05:48:06.097497] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:59.074 [2024-07-14 05:48:06.105547] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.074 [2024-07-14 05:48:06.105983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.074 [2024-07-14 05:48:06.106011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.074 [2024-07-14 05:48:06.106027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.074 [2024-07-14 05:48:06.106243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.074 [2024-07-14 05:48:06.106486] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.074 [2024-07-14 05:48:06.106507] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.074 [2024-07-14 05:48:06.106520] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.074 [2024-07-14 05:48:06.109750] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.074 [2024-07-14 05:48:06.119235] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.074 [2024-07-14 05:48:06.119693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.074 [2024-07-14 05:48:06.119719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.074 [2024-07-14 05:48:06.119751] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.074 [2024-07-14 05:48:06.120006] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.074 [2024-07-14 05:48:06.120254] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.074 [2024-07-14 05:48:06.120273] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.074 [2024-07-14 05:48:06.120287] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.074 [2024-07-14 05:48:06.123697] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.074 [2024-07-14 05:48:06.132747] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.074 [2024-07-14 05:48:06.133151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.074 [2024-07-14 05:48:06.133190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.074 [2024-07-14 05:48:06.133206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.074 [2024-07-14 05:48:06.133446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.074 [2024-07-14 05:48:06.133645] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.074 [2024-07-14 05:48:06.133664] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.074 [2024-07-14 05:48:06.133677] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.074 [2024-07-14 05:48:06.136959] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.074 EAL: No free 2048 kB hugepages reported on node 1 00:33:59.074 [2024-07-14 05:48:06.146619] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.074 [2024-07-14 05:48:06.147050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.074 [2024-07-14 05:48:06.147078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.074 [2024-07-14 05:48:06.147094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.074 [2024-07-14 05:48:06.147339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.074 [2024-07-14 05:48:06.147557] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.074 [2024-07-14 05:48:06.147579] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.074 [2024-07-14 05:48:06.147594] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.074 [2024-07-14 05:48:06.151000] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.074 [2024-07-14 05:48:06.160718] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.074 [2024-07-14 05:48:06.161130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.074 [2024-07-14 05:48:06.161157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.074 [2024-07-14 05:48:06.161181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.074 [2024-07-14 05:48:06.161424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.074 [2024-07-14 05:48:06.161630] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.074 [2024-07-14 05:48:06.161649] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.074 [2024-07-14 05:48:06.161663] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.074 [2024-07-14 05:48:06.165275] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.074 [2024-07-14 05:48:06.173707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:59.074 [2024-07-14 05:48:06.174999] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.074 [2024-07-14 05:48:06.175495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.074 [2024-07-14 05:48:06.175529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.074 [2024-07-14 05:48:06.175548] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.074 [2024-07-14 05:48:06.175809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.074 [2024-07-14 05:48:06.176052] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.074 [2024-07-14 05:48:06.176074] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.074 [2024-07-14 05:48:06.176089] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.334 [2024-07-14 05:48:06.180030] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.334 [2024-07-14 05:48:06.189091] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.334 [2024-07-14 05:48:06.189661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-14 05:48:06.189701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.334 [2024-07-14 05:48:06.189723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.334 [2024-07-14 05:48:06.190004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.334 [2024-07-14 05:48:06.190244] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.334 [2024-07-14 05:48:06.190266] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.334 [2024-07-14 05:48:06.190297] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.334 [2024-07-14 05:48:06.193879] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.334 [2024-07-14 05:48:06.203118] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.334 [2024-07-14 05:48:06.203609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-14 05:48:06.203641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.334 [2024-07-14 05:48:06.203659] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.334 [2024-07-14 05:48:06.203913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.334 [2024-07-14 05:48:06.204133] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.334 [2024-07-14 05:48:06.204177] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.334 [2024-07-14 05:48:06.204191] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.334 [2024-07-14 05:48:06.207753] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.334 [2024-07-14 05:48:06.217071] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.334 [2024-07-14 05:48:06.217528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-14 05:48:06.217557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.334 [2024-07-14 05:48:06.217574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.334 [2024-07-14 05:48:06.217816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.334 [2024-07-14 05:48:06.218061] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.334 [2024-07-14 05:48:06.218083] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.334 [2024-07-14 05:48:06.218098] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.334 [2024-07-14 05:48:06.221628] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.334 [2024-07-14 05:48:06.231097] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.334 [2024-07-14 05:48:06.231889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-14 05:48:06.231946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.334 [2024-07-14 05:48:06.231969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.334 [2024-07-14 05:48:06.232225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.334 [2024-07-14 05:48:06.232437] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.334 [2024-07-14 05:48:06.232458] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.334 [2024-07-14 05:48:06.232473] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.334 [2024-07-14 05:48:06.236005] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.334 [2024-07-14 05:48:06.244995] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.334 [2024-07-14 05:48:06.245438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-14 05:48:06.245473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.334 [2024-07-14 05:48:06.245492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.334 [2024-07-14 05:48:06.245732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.334 [2024-07-14 05:48:06.245971] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.334 [2024-07-14 05:48:06.245993] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.334 [2024-07-14 05:48:06.246008] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.334 [2024-07-14 05:48:06.249587] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.334 [2024-07-14 05:48:06.259016] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.334 [2024-07-14 05:48:06.259484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-14 05:48:06.259516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.334 [2024-07-14 05:48:06.259534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.334 [2024-07-14 05:48:06.259795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.334 [2024-07-14 05:48:06.260049] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.334 [2024-07-14 05:48:06.260072] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.334 [2024-07-14 05:48:06.260087] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.334 [2024-07-14 05:48:06.263635] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.334 [2024-07-14 05:48:06.267223] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:59.334 [2024-07-14 05:48:06.267258] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:59.334 [2024-07-14 05:48:06.267274] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:59.334 [2024-07-14 05:48:06.267303] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:59.334 [2024-07-14 05:48:06.267313] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:59.334 [2024-07-14 05:48:06.267482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:59.334 [2024-07-14 05:48:06.267544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:59.334 [2024-07-14 05:48:06.267546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:59.334 [2024-07-14 05:48:06.272673] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.334 [2024-07-14 05:48:06.273172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-14 05:48:06.273205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.334 [2024-07-14 05:48:06.273222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.334 [2024-07-14 05:48:06.273444] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.334 [2024-07-14 05:48:06.273666] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.335 [2024-07-14 05:48:06.273687] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.335 [2024-07-14 05:48:06.273703] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:59.335 [2024-07-14 05:48:06.276979] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.335 [2024-07-14 05:48:06.286284] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.335 [2024-07-14 05:48:06.286875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-14 05:48:06.286911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.335 [2024-07-14 05:48:06.286931] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.335 [2024-07-14 05:48:06.287155] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.335 [2024-07-14 05:48:06.287378] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.335 [2024-07-14 05:48:06.287399] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.335 [2024-07-14 05:48:06.287415] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.335 [2024-07-14 05:48:06.290704] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.335 [2024-07-14 05:48:06.300012] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.335 [2024-07-14 05:48:06.300587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-14 05:48:06.300624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.335 [2024-07-14 05:48:06.300643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.335 [2024-07-14 05:48:06.300875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.335 [2024-07-14 05:48:06.301110] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.335 [2024-07-14 05:48:06.301132] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.335 [2024-07-14 05:48:06.301149] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.335 [2024-07-14 05:48:06.304434] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.335 [2024-07-14 05:48:06.313712] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.335 [2024-07-14 05:48:06.314323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-14 05:48:06.314361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.335 [2024-07-14 05:48:06.314381] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.335 [2024-07-14 05:48:06.314606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.335 [2024-07-14 05:48:06.314828] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.335 [2024-07-14 05:48:06.314850] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.335 [2024-07-14 05:48:06.314875] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.335 [2024-07-14 05:48:06.318141] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.335 [2024-07-14 05:48:06.327445] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.335 [2024-07-14 05:48:06.328002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-14 05:48:06.328038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.335 [2024-07-14 05:48:06.328057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.335 [2024-07-14 05:48:06.328280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.335 [2024-07-14 05:48:06.328503] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.335 [2024-07-14 05:48:06.328524] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.335 [2024-07-14 05:48:06.328541] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.335 [2024-07-14 05:48:06.331796] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.335 [2024-07-14 05:48:06.341174] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.335 [2024-07-14 05:48:06.341728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-14 05:48:06.341765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.335 [2024-07-14 05:48:06.341784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.335 [2024-07-14 05:48:06.342016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.335 [2024-07-14 05:48:06.342239] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.335 [2024-07-14 05:48:06.342261] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.335 [2024-07-14 05:48:06.342278] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.335 [2024-07-14 05:48:06.345574] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.335 [2024-07-14 05:48:06.354878] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.335 [2024-07-14 05:48:06.355299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-14 05:48:06.355328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.335 [2024-07-14 05:48:06.355344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.335 [2024-07-14 05:48:06.355561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.335 [2024-07-14 05:48:06.355780] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.335 [2024-07-14 05:48:06.355801] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.335 [2024-07-14 05:48:06.355815] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.335 [2024-07-14 05:48:06.359091] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.335 [2024-07-14 05:48:06.368469] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.335 [2024-07-14 05:48:06.368903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-14 05:48:06.368931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.335 [2024-07-14 05:48:06.368948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.335 [2024-07-14 05:48:06.369164] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.335 [2024-07-14 05:48:06.369388] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.335 [2024-07-14 05:48:06.369409] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.335 [2024-07-14 05:48:06.369423] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.335 [2024-07-14 05:48:06.372654] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.335 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:59.335 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:59.335 05:48:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:59.335 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:59.335 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:59.335 [2024-07-14 05:48:06.382093] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.335 [2024-07-14 05:48:06.382489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-14 05:48:06.382517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.335 [2024-07-14 05:48:06.382534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.335 [2024-07-14 05:48:06.382750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.335 [2024-07-14 05:48:06.382977] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.335 [2024-07-14 05:48:06.382999] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.335 [2024-07-14 05:48:06.383014] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.335 [2024-07-14 05:48:06.386311] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.335 05:48:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:59.335 05:48:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:59.335 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.335 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:59.335 [2024-07-14 05:48:06.395791] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.335 [2024-07-14 05:48:06.396246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-14 05:48:06.396273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.335 [2024-07-14 05:48:06.396289] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.335 [2024-07-14 05:48:06.396504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.335 [2024-07-14 05:48:06.396724] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.335 [2024-07-14 05:48:06.396745] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.335 [2024-07-14 05:48:06.396759] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.335 [2024-07-14 05:48:06.399830] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:59.335 [2024-07-14 05:48:06.400042] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.335 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.335 05:48:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:59.335 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.335 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:59.335 [2024-07-14 05:48:06.409498] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.335 [2024-07-14 05:48:06.409938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-14 05:48:06.409966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.335 [2024-07-14 05:48:06.409982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.335 [2024-07-14 05:48:06.410198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.335 [2024-07-14 05:48:06.410425] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.335 [2024-07-14 05:48:06.410445] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.335 [2024-07-14 05:48:06.410459] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:59.335 [2024-07-14 05:48:06.413721] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.335 [2024-07-14 05:48:06.423183] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.335 [2024-07-14 05:48:06.423622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-14 05:48:06.423650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.335 [2024-07-14 05:48:06.423666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.336 [2024-07-14 05:48:06.423892] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.336 [2024-07-14 05:48:06.424111] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.336 [2024-07-14 05:48:06.424138] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.336 [2024-07-14 05:48:06.424153] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.336 [2024-07-14 05:48:06.427460] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.336 [2024-07-14 05:48:06.437097] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.336 [2024-07-14 05:48:06.437621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-14 05:48:06.437661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.336 [2024-07-14 05:48:06.437681] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.336 [2024-07-14 05:48:06.437916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.336 [2024-07-14 05:48:06.438161] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.336 [2024-07-14 05:48:06.438185] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.336 [2024-07-14 05:48:06.438203] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.594 [2024-07-14 05:48:06.441582] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.594 Malloc0 00:33:59.594 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.594 05:48:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:59.594 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.594 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:59.594 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.594 05:48:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:59.594 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.594 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:59.594 [2024-07-14 05:48:06.450787] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.594 [2024-07-14 05:48:06.451203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.594 [2024-07-14 05:48:06.451232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7301e0 with addr=10.0.0.2, port=4420 00:33:59.594 [2024-07-14 05:48:06.451249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7301e0 is same with the state(5) to be set 00:33:59.594 [2024-07-14 05:48:06.451465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7301e0 (9): Bad file descriptor 00:33:59.594 [2024-07-14 05:48:06.451685] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.594 [2024-07-14 05:48:06.451706] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.594 [2024-07-14 05:48:06.451721] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.594 [2024-07-14 05:48:06.454996] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.594 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.594 05:48:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:59.594 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.594 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:59.594 [2024-07-14 05:48:06.462242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:59.594 [2024-07-14 05:48:06.464483] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.594 05:48:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.594 05:48:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3388946 00:33:59.594 [2024-07-14 05:48:06.580423] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
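The rpc_cmd calls interleaved through the failures above rebuild the target side that bdevperf is trying to reach: a TCP transport, a 64 MiB / 512-byte-block Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and finally a listener on 10.0.0.2:4420, after which the pending reset completes successfully. rpc_cmd is the suite's shorthand for SPDK's scripts/rpc.py; issued by hand against an already running nvmf_tgt the same sequence would look roughly like this (the ./scripts/rpc.py path and default RPC socket are assumptions, the arguments are taken from the trace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420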
00:34:09.560 00:34:09.560 Latency(us) 00:34:09.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.560 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:09.560 Verification LBA range: start 0x0 length 0x4000 00:34:09.560 Nvme1n1 : 15.01 6835.15 26.70 9140.12 0.00 7988.59 1104.40 19806.44 00:34:09.560 =================================================================================================================== 00:34:09.560 Total : 6835.15 26.70 9140.12 0.00 7988.59 1104.40 19806.44 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:09.560 rmmod nvme_tcp 00:34:09.560 rmmod nvme_fabrics 00:34:09.560 rmmod nvme_keyring 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3389721 ']' 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3389721 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 3389721 ']' 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 3389721 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3389721 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3389721' 00:34:09.560 killing process with pid 3389721 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 3389721 00:34:09.560 05:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 3389721 00:34:09.560 05:48:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:09.560 05:48:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
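In the Latency(us) summary above, the columns after the device name are runtime in seconds, IOPS, MiB/s, Fail/s, TO/s, and the average/min/max completion latency in microseconds. The throughput column is consistent with the 4096-byte I/O size reported for the job; a quick sanity check of the arithmetic (not part of the test output):

  awk 'BEGIN { printf "%.2f MiB/s\n", 6835.15 * 4096 / (1024 * 1024) }'
  # 26.70 MiB/s, matching the table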
00:34:09.560 05:48:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:09.560 05:48:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:09.560 05:48:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:09.560 05:48:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.560 05:48:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:09.560 05:48:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.466 05:48:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:11.466 00:34:11.466 real 0m22.309s 00:34:11.466 user 0m59.846s 00:34:11.466 sys 0m4.090s 00:34:11.466 05:48:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:11.466 05:48:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.466 ************************************ 00:34:11.466 END TEST nvmf_bdevperf 00:34:11.466 ************************************ 00:34:11.466 05:48:18 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:11.466 05:48:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:11.466 05:48:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:11.466 05:48:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:11.466 ************************************ 00:34:11.466 START TEST nvmf_target_disconnect 00:34:11.466 ************************************ 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:11.466 * Looking for test storage... 
00:34:11.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:11.466 05:48:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
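The stretch of trace that follows scans /sys/bus/pci/devices for supported NICs (here two Intel E810-family ports, vendor 0x8086 device 0x159b, exposed as cvl_0_0 and cvl_0_1), then isolates one port in a private network namespace so target and initiator talk over a real link: cvl_0_0 gets 10.0.0.2/24 inside cvl_0_0_ns_spdk, cvl_0_1 keeps 10.0.0.1/24 in the root namespace, an iptables rule admits TCP port 4420, and a ping in each direction verifies the path. Condensed into stand-alone commands, the plumbing amounts to roughly this (a sketch of what the nvmf/common.sh helpers do, not the verbatim script):

  # list net interfaces backed by Intel 0x8086:0x159b PCI functions
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
      ls "$pci/net"
  done

  # move one port into a namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2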
00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:13.372 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:13.372 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.372 05:48:20 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:13.372 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:13.372 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:13.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:13.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:34:13.372 00:34:13.372 --- 10.0.0.2 ping statistics --- 00:34:13.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:13.372 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:13.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:13.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:34:13.372 00:34:13.372 --- 10.0.0.1 ping statistics --- 00:34:13.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:13.372 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:13.372 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:13.373 ************************************ 00:34:13.373 START TEST nvmf_target_disconnect_tc1 00:34:13.373 ************************************ 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:13.373 
05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:13.373 EAL: No free 2048 kB hugepages reported on node 1 00:34:13.373 [2024-07-14 05:48:20.461321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.373 [2024-07-14 05:48:20.461394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88e740 with addr=10.0.0.2, port=4420 00:34:13.373 [2024-07-14 05:48:20.461432] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:13.373 [2024-07-14 05:48:20.461454] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:13.373 [2024-07-14 05:48:20.461468] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:13.373 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:13.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:13.373 Initializing NVMe Controllers 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:13.373 00:34:13.373 real 0m0.102s 00:34:13.373 user 0m0.041s 00:34:13.373 sys 0m0.061s 
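As the tc1 trace above shows, this first case points the reconnect example at 10.0.0.2:4420 with no target listening: connect() is refused (errno 111), spdk_nvme_probe() cannot create the admin qpair, and the NOT wrapper turns that expected non-zero exit status into a passing test. Stripped of the wrappers, the invocation is essentially the following (path shortened; a sketch, not the verbatim target_disconnect.sh):

  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      && echo 'unexpected success' || echo 'probe failed as expected'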
00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:13.373 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:13.373 ************************************ 00:34:13.373 END TEST nvmf_target_disconnect_tc1 00:34:13.373 ************************************ 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:13.632 ************************************ 00:34:13.632 START TEST nvmf_target_disconnect_tc2 00:34:13.632 ************************************ 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3393366 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3393366 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3393366 ']' 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:13.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:13.632 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:13.632 [2024-07-14 05:48:20.576608] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
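For tc2 a target is actually started before any I/O is attempted: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0xF0, records its pid (3393366 here), and waitforlisten blocks until the application answers on its RPC socket (/var/tmp/spdk.sock, per the waiting message above). Reduced to plain commands this is roughly the following; the polling loop is only an approximation of waitforlisten, and rpc_get_methods is used here simply as a cheap liveness probe:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # wait until the target's RPC socket responds
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done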
00:34:13.632 [2024-07-14 05:48:20.576705] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:13.632 EAL: No free 2048 kB hugepages reported on node 1 00:34:13.632 [2024-07-14 05:48:20.642103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:13.632 [2024-07-14 05:48:20.727819] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:13.632 [2024-07-14 05:48:20.727901] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:13.632 [2024-07-14 05:48:20.727915] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:13.632 [2024-07-14 05:48:20.727926] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:13.632 [2024-07-14 05:48:20.727949] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:13.632 [2024-07-14 05:48:20.728260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:13.632 [2024-07-14 05:48:20.728323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:13.632 [2024-07-14 05:48:20.728385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:13.632 [2024-07-14 05:48:20.728388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:13.891 Malloc0 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:13.891 [2024-07-14 05:48:20.885979] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:13.891 [2024-07-14 05:48:20.914242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3393472 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:13.891 05:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:13.891 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.457 05:48:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3393366 00:34:16.457 05:48:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 
00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 [2024-07-14 05:48:22.939174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 
starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 [2024-07-14 05:48:22.939541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 
00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Read completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.457 Write completed with error (sct=0, sc=8) 00:34:16.457 starting I/O failed 00:34:16.458 Write completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Write completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 [2024-07-14 05:48:22.939846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Write completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Write completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Write completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Write completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Write completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read 
completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Write completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Write completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Write completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 Read completed with error (sct=0, sc=8) 00:34:16.458 starting I/O failed 00:34:16.458 [2024-07-14 05:48:22.940176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.458 [2024-07-14 05:48:22.940448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.940488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.940687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.940716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.940888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.940915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.941085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.941111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.941299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.941328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.941538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.941581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.941793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.941820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 
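The (sct=0, sc=8) pairs in the dump above are NVMe completion statuses: status code type 0 is the generic command status set, and status code 0x08 in that set is "Command Aborted due to SQ Deletion" per the NVMe spec, which is what the outstanding reads and writes report once the qpair is torn down after the CQ transport error. A minimal, hypothetical application-side completion callback that surfaces the same two fields might look like the sketch below (an illustration only, not code from this test):

    #include "spdk/nvme.h"
    #include <stdio.h>

    /* Hypothetical I/O completion callback: prints the same sct/sc pair seen in
     * the log above when a command completes with an error status. */
    static void
    io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "I/O failed: sct=%d, sc=%d\n",
                    cpl->status.sct, cpl->status.sc);
        }
    }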
00:34:16.458 [2024-07-14 05:48:22.942001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.942028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.942190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.942216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.942435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.942461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.942619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.942661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.942881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.942925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.943089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.943121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.943374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.943403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.943605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.943646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.943876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.943902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.944071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.944097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 
00:34:16.458 [2024-07-14 05:48:22.944392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.944418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.944744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.944795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.945019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.945059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.945297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.945338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.945528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.945554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.945714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.945742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.945937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.945965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.946156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.946181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.946425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.946451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 00:34:16.458 [2024-07-14 05:48:22.946643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.458 [2024-07-14 05:48:22.946667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.458 qpair failed and we were unable to recover it. 
00:34:16.458 [2024-07-14 05:48:22.946850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.946880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.947073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.947099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.947316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.947341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.947519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.947564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.947790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.947816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.948023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.948049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.948211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.948239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.948420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.948446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.948635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.948661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.948815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.948842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 
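errno 111 in the repeated posix_sock_create messages is ECONNREFUSED on Linux: the TCP connection to 10.0.0.2 on port 4420 (the standard NVMe/TCP port) is actively refused because nothing is listening there, so every qpair reconnect attempt fails the same way. A self-contained POSIX sketch of the same check (purely illustrative, not SPDK's socket layer):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Same target as in the log: 10.0.0.2, NVMe/TCP port 4420. */
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
            return 1;
        }
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            /* On Linux, errno 111 is ECONNREFUSED: no listener on that port. */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
        }
        close(fd);
        return 0;
    }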
00:34:16.459 [2024-07-14 05:48:22.949050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.949089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.949285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.949312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.949508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.949535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.949696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.949723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.949912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.949939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.950122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.950163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.950398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.950424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.950635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.950661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.950876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.950903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.951089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.951115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 
00:34:16.459 [2024-07-14 05:48:22.951301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.951326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.951525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.951556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.951778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.951803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.952003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.952030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.952221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.952247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.952494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.952525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.952677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.952704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.952887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.952914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.953068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.953094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.953280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.953307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 
00:34:16.459 [2024-07-14 05:48:22.953527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.953556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.953716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.953745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.953987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.954013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.954169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.954195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.954378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.954404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.954578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.954607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.954791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.954818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.955024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.955051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.955241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.955266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.459 [2024-07-14 05:48:22.955432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.955458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 
00:34:16.459 [2024-07-14 05:48:22.955619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.459 [2024-07-14 05:48:22.955645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.459 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.955882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.955910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.956092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.956118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.956332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.956360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.956569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.956596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.956802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.956831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.957036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.957062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.957228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.957254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.957461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.957487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.957672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.957699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 
00:34:16.460 [2024-07-14 05:48:22.957949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.957975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.958179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.958205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.958435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.958462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.958619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.958645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.958806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.958832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.959046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.959072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.959271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.959296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.959477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.959503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.959708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.959733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.959925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.959953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 
00:34:16.460 [2024-07-14 05:48:22.960141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.960168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.960324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.960351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.960550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.960578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.960823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.960849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.961025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.961052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.961239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.961269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.961516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.961558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.961764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.961789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.961951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.961979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.962189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.962216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 
00:34:16.460 [2024-07-14 05:48:22.962390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.962416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.962599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.962628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.962845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.962878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.963068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.963094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.963247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.963274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.963429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.963456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.963644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.963669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.963854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.963887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.964090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.964116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.964358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.964387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 
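The earlier "CQ transport error -6 (No such device or address)" lines are reported when spdk_nvme_qpair_process_completions() returns a negative value after the transport connection is lost; -6 corresponds to -ENXIO. A rough polling sketch showing how that return value is typically checked (an assumption-level illustration, not the test's own code):

    #include "spdk/nvme.h"
    #include <stdio.h>
    #include <string.h>

    /* Poll one qpair; a negative return from
     * spdk_nvme_qpair_process_completions() signals a transport-level failure
     * (e.g. -6 / -ENXIO, as seen in the log) rather than a per-command error. */
    static void
    poll_qpair(struct spdk_nvme_qpair *qpair)
    {
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

        if (rc < 0) {
            fprintf(stderr, "qpair poll failed: %d (%s)\n", rc, strerror(-rc));
        }
    }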
00:34:16.460 [2024-07-14 05:48:22.964610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.964639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.460 [2024-07-14 05:48:22.964841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.460 [2024-07-14 05:48:22.964877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.460 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.965060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.965086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.965267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.965295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.965478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.965505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.965683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.965708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.965908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.965938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.966138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.966167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.966343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.966368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.966525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.966551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 
00:34:16.461 [2024-07-14 05:48:22.966732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.966757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.966942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.966969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.967153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.967179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.967358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.967384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.967590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.967616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.967767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.967793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.968014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.968057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.968277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.968303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.968465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.968493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.968645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.968671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 
00:34:16.461 [2024-07-14 05:48:22.968883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.968910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.969068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.969093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.969254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.969280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.969464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.969490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.969683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.969708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.969881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.969915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.970122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.970148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.970340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.970366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.970576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.970602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.970787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.970813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 
00:34:16.461 [2024-07-14 05:48:22.971002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.971028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.971211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.971237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.971416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.971442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.971636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.971663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.971886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.461 [2024-07-14 05:48:22.971943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.461 qpair failed and we were unable to recover it. 00:34:16.461 [2024-07-14 05:48:22.972143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.972173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.972393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.972436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.972706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.972732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.972923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.972949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.973141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.973168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 
00:34:16.462 [2024-07-14 05:48:22.973324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.973352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.973542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.973567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.973754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.973780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.973944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.973989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.974201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.974227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.974406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.974431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.974617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.974642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.974852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.974887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.975088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.975114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.975268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.975293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 
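For context, the connection the test keeps retrying is an NVMe/TCP controller attach to 10.0.0.2 port 4420. A hedged sketch of such an attach using the public SPDK API follows; the subsystem NQN is a placeholder (it is not shown in this part of the log) and the SPDK environment is assumed to be initialized already:

    #include "spdk/nvme.h"
    #include <string.h>
    #include <stdio.h>

    /* Sketch only: build an NVMe/TCP transport ID for the target seen in the
     * log and attempt a synchronous controller attach. Returns NULL when the
     * target is unreachable, which is the situation this test run is hitting. */
    static struct spdk_nvme_ctrlr *
    connect_tcp_ctrlr(void)
    {
        struct spdk_nvme_transport_id trid;

        memset(&trid, 0, sizeof(trid));
        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
        trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
        snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
        /* Placeholder NQN for illustration; the real subsystem NQN is defined
         * elsewhere in the test setup. */
        snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode-example");

        return spdk_nvme_connect(&trid, NULL, 0);
    }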
00:34:16.462 [2024-07-14 05:48:22.975477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.975502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.975661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.975688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.975890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.975919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.976152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.976177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.976407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.976435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.976661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.976686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.976876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.976904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.977077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.977105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.977338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.977363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.977545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.977571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 
00:34:16.462 [2024-07-14 05:48:22.977754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.977780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.977978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.978005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.978169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.978195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.978404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.978429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.978638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.978664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.978849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.978885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.979047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.979072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.979277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.979302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.979512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.979538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.979758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.979783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 
00:34:16.462 [2024-07-14 05:48:22.979994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.980021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.980184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.980209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.980390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.980417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.980599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.980626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.980840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.980871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.981090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.462 [2024-07-14 05:48:22.981116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.462 qpair failed and we were unable to recover it. 00:34:16.462 [2024-07-14 05:48:22.981270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.981296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.981480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.981505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.981711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.981740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.981972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.981998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 
00:34:16.463 [2024-07-14 05:48:22.982183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.982208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.982362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.982387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.982542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.982568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.982749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.982775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.982954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.982980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.983141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.983167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.983348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.983374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.983578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.983606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.983838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.983863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.984055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.984080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 
00:34:16.463 [2024-07-14 05:48:22.984246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.984271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.984474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.984500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.984711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.984737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.984914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.984943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.985178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.985204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.985387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.985414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.985618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.985647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.985857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.985890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.986100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.986126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.986277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.986303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 
00:34:16.463 [2024-07-14 05:48:22.986458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.986485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.986663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.986689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.986875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.986901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.987077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.987103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.987259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.987285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.987461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.987491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.987678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.987704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.987909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.987952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.988141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.988166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.988378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.988404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 
00:34:16.463 [2024-07-14 05:48:22.988589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.988615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.988800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.988825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.989014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.989041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.989221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.989246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.989427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.989452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.989600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.989626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.989814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.463 [2024-07-14 05:48:22.989839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.463 qpair failed and we were unable to recover it. 00:34:16.463 [2024-07-14 05:48:22.989993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.990019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.990233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.990258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.990416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.990442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 
00:34:16.464 [2024-07-14 05:48:22.990652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.990678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.990829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.990855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.991043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.991069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.991252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.991278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.991485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.991511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.991692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.991718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.991895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.991922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.992131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.992156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.992338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.992363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.992564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.992592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 
00:34:16.464 [2024-07-14 05:48:22.992798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.992825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.993036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.993062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.993240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.993266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.993444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.993470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.993624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.993649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.993803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.993829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.994048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.994074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.994253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.994278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.994489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.994515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.994721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.994749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 
00:34:16.464 [2024-07-14 05:48:22.994962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.994988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.995153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.995178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.995358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.995383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.995587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.995612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.995793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.995818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.995984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.996014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.996201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.996227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.996414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.996440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.996617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.996642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.996803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.996829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 
00:34:16.464 [2024-07-14 05:48:22.997023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.997050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.997236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.997262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.997445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.997471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.997649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.997674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.997850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.997882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.998040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.998066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.998253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.998278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.464 [2024-07-14 05:48:22.998457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.464 [2024-07-14 05:48:22.998482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.464 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:22.998691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:22.998717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:22.998930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:22.998957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 
00:34:16.465 [2024-07-14 05:48:22.999163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:22.999189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:22.999374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:22.999400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:22.999543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:22.999569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:22.999776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:22.999802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:22.999998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.000024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.000207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.000232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.000440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.000465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.000652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.000679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.000876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.000905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.001151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.001177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 
00:34:16.465 [2024-07-14 05:48:23.001399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.001425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.001641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.001666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.001858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.001889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.002070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.002095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.002284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.002310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.002518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.002544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.002730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.002756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.003000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.003026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.003203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.003229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.003404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.003429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 
00:34:16.465 [2024-07-14 05:48:23.003613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.003639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.003817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.003843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.004007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.004033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.004216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.004242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.004409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.004435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.004615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.004645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.004821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.004850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.005113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.005139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.005296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.005322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.005502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.005527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 
00:34:16.465 [2024-07-14 05:48:23.005732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.005758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.005947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.005974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.006159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.465 [2024-07-14 05:48:23.006185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.465 qpair failed and we were unable to recover it. 00:34:16.465 [2024-07-14 05:48:23.006343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.006369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.006579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.006605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.006792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.006818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.007041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.007067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.007295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.007321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.007532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.007558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.007740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.007766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 
00:34:16.466 [2024-07-14 05:48:23.007974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.008000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.008186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.008212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.008420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.008446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.008656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.008682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.008871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.008896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.009112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.009138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.009323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.009349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.009561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.009586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.009739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.009765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.009963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.009989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 
00:34:16.466 [2024-07-14 05:48:23.010151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.010177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.010383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.010408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.010652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.010678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.010883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.010910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.011086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.011112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.011271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.011297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.011490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.011518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.011717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.011746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.011975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.012001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.012167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.012193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 
00:34:16.466 [2024-07-14 05:48:23.012400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.012426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.012603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.012628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.012823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.012848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.013045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.013072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.013250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.013275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.013491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.013521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.013725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.013754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.013961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.013988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.014172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.014198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.014381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.014410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 
00:34:16.466 [2024-07-14 05:48:23.014614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.014641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.014843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.014878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.015121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.015147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.015331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.466 [2024-07-14 05:48:23.015357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.466 qpair failed and we were unable to recover it. 00:34:16.466 [2024-07-14 05:48:23.015541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.015567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.015797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.015825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.016006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.016032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.016251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.016277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.016432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.016458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.016655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.016681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 
00:34:16.467 [2024-07-14 05:48:23.016862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.016902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.017113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.017139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.017324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.017350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.017534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.017560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.017738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.017766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.018014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.018040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.018243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.018268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.018503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.018528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.018710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.018736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.018964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.018993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 
00:34:16.467 [2024-07-14 05:48:23.019204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.019229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.019413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.019439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.019651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.019681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.019887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.019913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.020099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.020125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.020308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.020336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.020545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.020572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.020777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.020803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.021044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.021073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.021284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.021312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 
00:34:16.467 [2024-07-14 05:48:23.021540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.021565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.021726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.021751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.021964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.021991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.022207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.022233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.022452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.022479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.022666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.022692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.022875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.022902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.023105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.023134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.023360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.023385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.023565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.023591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 
00:34:16.467 [2024-07-14 05:48:23.023798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.023824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.024014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.024041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.024250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.024276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.024483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.024511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.024696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.467 [2024-07-14 05:48:23.024722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.467 qpair failed and we were unable to recover it. 00:34:16.467 [2024-07-14 05:48:23.024904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.024930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.025080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.025106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.025295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.025322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.025508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.025535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.025720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.025746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 
00:34:16.468 [2024-07-14 05:48:23.025926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.025952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.026116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.026142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.026303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.026329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.026481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.026508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.026688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.026714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.026900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.026927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.027112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.027138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.027319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.027345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.027527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.027553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.027707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.027732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 
00:34:16.468 [2024-07-14 05:48:23.027924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.027950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.028130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.028156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.028332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.028362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.028522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.028549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.028731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.028757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.028946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.028973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.029184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.029210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.029416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.029441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.029641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.029670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.029881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.029907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 
00:34:16.468 [2024-07-14 05:48:23.030120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.030162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.030410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.030436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.030649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.030675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.030855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.030886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.031051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.031077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.031276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.031301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.031509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.031539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.031714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.031742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.031980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.032006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.032194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.032221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 
00:34:16.468 [2024-07-14 05:48:23.032404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.032431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.032594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.032620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.032794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.032821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.033031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.033074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.033325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.033351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.468 [2024-07-14 05:48:23.033499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.468 [2024-07-14 05:48:23.033543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.468 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.033803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.033832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.034043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.034070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.034226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.034252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.034409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.034435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 
00:34:16.469 [2024-07-14 05:48:23.034641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.034667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.034816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.034841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.035005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.035031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.035218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.035243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.035414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.035440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.035615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.035640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.035796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.035822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.036011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.036037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.036242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.036270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.036474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.036501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 
00:34:16.469 [2024-07-14 05:48:23.036701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.036730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.036899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.036929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.037110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.037140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.037291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.037318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.037502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.037528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.037680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.037705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.037884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.037911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.038092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.038118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.038263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.038289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.038463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.038489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 
00:34:16.469 [2024-07-14 05:48:23.038686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.038715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.038949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.038975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.039134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.039161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.039347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.039372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.039560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.039586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.039793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.039818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.040047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.040073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.040262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.040288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.040446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.040471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.040650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.040675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 
00:34:16.469 [2024-07-14 05:48:23.040886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.040912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.041109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.041134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.041321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.041346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.041492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.041518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.041745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.041774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.041973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.042002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.469 [2024-07-14 05:48:23.042207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.469 [2024-07-14 05:48:23.042234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.469 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.042441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.042467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.042649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.042675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.042875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.042901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 
00:34:16.470 [2024-07-14 05:48:23.043100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.043126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.043360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.043389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.043569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.043594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.043814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.043843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.044054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.044079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.044296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.044322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.044526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.044554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.044729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.044758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.044952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.044978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.045132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.045158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 
00:34:16.470 [2024-07-14 05:48:23.045314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.045340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.045528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.045553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.045737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.045766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.045979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.046008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.046212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.046238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.046477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.046503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.046658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.046684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.046931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.046957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.047167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.047192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.047402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.047431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 
00:34:16.470 [2024-07-14 05:48:23.047638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.047664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.047886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.047912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.048094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.048120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.048315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.048341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.048526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.048553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.048757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.048786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.049024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.049050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.049256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.049284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.049515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.049544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.049718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.049745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 
00:34:16.470 [2024-07-14 05:48:23.049950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.049979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.050192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.470 [2024-07-14 05:48:23.050218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.470 qpair failed and we were unable to recover it. 00:34:16.470 [2024-07-14 05:48:23.050404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.050430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.050632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.050660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.050895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.050921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.051090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.051115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.051343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.051372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.051587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.051616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.051822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.051847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.052042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.052068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 
00:34:16.471 [2024-07-14 05:48:23.052261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.052287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.052467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.052492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.052678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.052704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.052902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.052931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.053144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.053170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.053353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.053381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.053606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.053635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.053853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.053886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.054087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.054113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.054311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.054340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 
00:34:16.471 [2024-07-14 05:48:23.054540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.054567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.054740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.054769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.055006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.055036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.055225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.055251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.055432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.055458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.055663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.055691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.055904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.055931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.056116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.056142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.056346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.056374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.056640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.056666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 
00:34:16.471 [2024-07-14 05:48:23.056877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.056904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.057105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.057134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.057339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.057366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.057548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.057577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.057775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.057804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.058022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.058048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.058227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.058256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.058471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.058497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.058659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.058686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.058875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.058901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 
00:34:16.471 [2024-07-14 05:48:23.059065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.059091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.059274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.059300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.059473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.471 [2024-07-14 05:48:23.059501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.471 qpair failed and we were unable to recover it. 00:34:16.471 [2024-07-14 05:48:23.059843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.059911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.060144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.060170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.060375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.060403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.060629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.060655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.060839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.060878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.061039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.061065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.061256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.061282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 
00:34:16.472 [2024-07-14 05:48:23.061489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.061515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.061748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.061776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.062020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.062047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.062198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.062223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.062404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.062429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.062614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.062640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.062843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.062873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.063056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.063085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.063287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.063317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.063497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.063523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 
00:34:16.472 [2024-07-14 05:48:23.063723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.063752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.063932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.063961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.064173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.064204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.064416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.064445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.064650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.064679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.064859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.064889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.065091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.065121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.065324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.065351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.065533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.065559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.065792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.065821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 
00:34:16.472 [2024-07-14 05:48:23.066038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.066063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.066245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.066271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.066471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.066500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.066679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.066706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.066870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.066896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.067057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.067083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.067333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.067361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.067534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.067560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.067790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.067819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.068106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.068135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 
00:34:16.472 [2024-07-14 05:48:23.068317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.068342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.068575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.068604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.068850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.068891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.069053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.472 [2024-07-14 05:48:23.069078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.472 qpair failed and we were unable to recover it. 00:34:16.472 [2024-07-14 05:48:23.069292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.069321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.069554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.069580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.069766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.069793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.069982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.070008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.070220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.070249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.070431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.070457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 
00:34:16.473 [2024-07-14 05:48:23.070617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.070643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.070801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.070828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.071013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.071040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.071238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.071267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.071503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.071529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.071681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.071707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.071889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.071915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.072073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.072099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.072307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.072333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.072543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.072571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 
00:34:16.473 [2024-07-14 05:48:23.072770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.072799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.072994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.073020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.073183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.073212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.073385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.073411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.073553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.073578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.073760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.073786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.073978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.074006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.074188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.074213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.074428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.074457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.074627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.074655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 
00:34:16.473 [2024-07-14 05:48:23.074823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.074849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.075064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.075093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.075311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.075337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.075548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.075573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.075772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.075800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.076004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.076033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.076238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.076264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.076474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.076499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.076704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.076732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.076962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.076990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 
00:34:16.473 [2024-07-14 05:48:23.077154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.077180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.077325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.077351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.077536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.077562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.077814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.077840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.078053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.473 [2024-07-14 05:48:23.078079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.473 qpair failed and we were unable to recover it. 00:34:16.473 [2024-07-14 05:48:23.078266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.078292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.078492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.078521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.078697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.078724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.078933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.078960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.079152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.079178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 
00:34:16.474 [2024-07-14 05:48:23.079384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.079412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.079592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.079618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.079812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.079841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.080073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.080099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.080287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.080312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.080532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.080559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.080724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.080749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.080938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.080965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.081154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.081180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.081423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.081452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 
00:34:16.474 [2024-07-14 05:48:23.081683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.081708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.081948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.081975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.082137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.082167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.082378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.082404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.082602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.082630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.082805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.082832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.083046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.083072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.083249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.083275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.083477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.083505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.083693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.083718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 
00:34:16.474 [2024-07-14 05:48:23.083931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.083958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.084117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.084143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.084329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.084356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.084549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.084575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.084776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.084804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.085008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.085035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.085210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.085239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.085441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.085470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.085647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.085673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.474 [2024-07-14 05:48:23.085854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.085885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 
00:34:16.474 [2024-07-14 05:48:23.086081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.474 [2024-07-14 05:48:23.086109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.474 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.086333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.086359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.086596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.086625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.086818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.086847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.087123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.087149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.087328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.087357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.087559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.087588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.087789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.087815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.088006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.088032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.088219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.088245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 
00:34:16.475 [2024-07-14 05:48:23.088456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.088482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.088683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.088712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.088914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.088940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.089121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.089147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.089348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.089376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.089613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.089639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.089929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.089956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.090186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.090215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.090419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.090447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.090653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.090679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 
00:34:16.475 [2024-07-14 05:48:23.090903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.090930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.091111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.091137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.091298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.091328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.091504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.091532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.091731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.091760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.091952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.091978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.092158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.092183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.092385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.092413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.092616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.092645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.092895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.092921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 
00:34:16.475 [2024-07-14 05:48:23.093081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.093107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.093372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.093398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.093616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.093642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.093817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.093843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.094107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.094146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.094370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.094398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.094587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.094614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.094823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.094875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.095060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.095086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.095276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.095302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 
00:34:16.475 [2024-07-14 05:48:23.095507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.095533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.475 [2024-07-14 05:48:23.095715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.475 [2024-07-14 05:48:23.095758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.475 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.095950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.095977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.096165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.096193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.096376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.096403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.096644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.096686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.096878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.096904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.097111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.097139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.097343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.097385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.097598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.097641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 
00:34:16.476 [2024-07-14 05:48:23.097797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.097823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.098013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.098039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.098245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.098287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.098506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.098533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.098743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.098770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.098979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.099024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.099269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.099312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.099528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.099554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.099762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.099788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.100004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.100033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 
00:34:16.476 [2024-07-14 05:48:23.100218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.100265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.100483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.100508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.100662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.100695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.100911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.100938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.101160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.101204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.101420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.101447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.101668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.101712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.101895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.101921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.102129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.102173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.102374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.102417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 
00:34:16.476 [2024-07-14 05:48:23.102729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.102780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.103024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.103067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.103278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.103321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.103534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.103578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.103786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.103812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.104028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.104058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.104283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.104327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.104545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.104572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.104756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.104781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.105016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.105058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 
00:34:16.476 [2024-07-14 05:48:23.105268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.105311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.105495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.105521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.476 qpair failed and we were unable to recover it. 00:34:16.476 [2024-07-14 05:48:23.105705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.476 [2024-07-14 05:48:23.105730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.105934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.105978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.106184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.106228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.106475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.106500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.106681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.106708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.106891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.106917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.107125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.107154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.107410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.107454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 
00:34:16.477 [2024-07-14 05:48:23.107642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.107685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.107875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.107901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.108058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.108083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.108326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.108369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.108570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.108613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.108772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.108798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.108983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.109010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.109246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.109289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.109510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.109552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.109740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.109765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 
00:34:16.477 [2024-07-14 05:48:23.109967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.110012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.110263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.110289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.110477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.110507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.110662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.110688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.110842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.110873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.111064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.111090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.111321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.111364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.111608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.111651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.111877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.111903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.112085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.112127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 
00:34:16.477 [2024-07-14 05:48:23.112338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.112381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.112584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.112627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.112826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.112851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.113080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.113125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.113403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.113446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.113652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.113695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.113885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.113911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.114092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.114119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.114388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.114432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.114666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.114710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 
00:34:16.477 [2024-07-14 05:48:23.114979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.115005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.115197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.115224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.115412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.115455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.477 qpair failed and we were unable to recover it. 00:34:16.477 [2024-07-14 05:48:23.115668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.477 [2024-07-14 05:48:23.115712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.115899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.115925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.116128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.116170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.116355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.116381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.116569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.116612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.116798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.116824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.117020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.117064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 
00:34:16.478 [2024-07-14 05:48:23.117250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.117276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.117539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.117565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.117725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.117750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.117945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.117972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.118173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.118202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.118403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.118447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.118709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.118734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.118915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.118941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.119170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.119212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.119429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.119456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 
00:34:16.478 [2024-07-14 05:48:23.119642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.119668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.119959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.120003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.120191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.120225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.120429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.120473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.120659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.120686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.120898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.120926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.121140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.121166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.121412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.121456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.121696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.121740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.121927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.121954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 
00:34:16.478 [2024-07-14 05:48:23.122242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.122286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.122493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.122537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.122801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.122827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.123021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.123047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.123235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.123261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.123443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.123468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.123652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.123681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.123913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.123940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.124146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.478 [2024-07-14 05:48:23.124174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.478 qpair failed and we were unable to recover it. 00:34:16.478 [2024-07-14 05:48:23.124430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.124472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 
00:34:16.479 [2024-07-14 05:48:23.124680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.124722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.124965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.124991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.125173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.125216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.125426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.125469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.125703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.125746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.125962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.126006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.126228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.126255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.126441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.126467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.126672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.126698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.126992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.127018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 
00:34:16.479 [2024-07-14 05:48:23.127264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.127307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.127493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.127536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.127749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.127775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.128054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.128097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.128338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.128382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.128560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.128605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.128810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.128836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.129070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.129096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.129305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.129357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.129596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.129640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 
00:34:16.479 [2024-07-14 05:48:23.129805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.129831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.130068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.130094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.130251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.130281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.130502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.130544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.130701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.130727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.131014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.131059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.131301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.131343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.131557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.131599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.131810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.131836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.132032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.132076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 
00:34:16.479 [2024-07-14 05:48:23.132244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.132288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.132497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.132540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.132715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.132742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.132982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.133026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.133238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.133263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.133439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.133483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.133754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.133780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.134018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.134061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.134227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.134256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 00:34:16.479 [2024-07-14 05:48:23.134409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.479 [2024-07-14 05:48:23.134434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.479 qpair failed and we were unable to recover it. 
00:34:16.479 [2024-07-14 05:48:23.134637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.134680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.134945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.134972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.135157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.135200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.135476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.135521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.135735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.135761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.135926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.135952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.136162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.136189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.136342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.136369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.136576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.136602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.136812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.136859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 
00:34:16.480 [2024-07-14 05:48:23.137065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.137093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.137309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.137351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.137588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.137616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.137803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.137831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.138056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.138081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.138233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.138259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.138419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.138443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.138623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.138650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.138851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.138897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.139109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.139150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 
00:34:16.480 [2024-07-14 05:48:23.139388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.139417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.139748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.139806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.140028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.140055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.140273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.140302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.140541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.140568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.140830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.140888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.141091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.141116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.141343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.141384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.141619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.141644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.141863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.141894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 
00:34:16.480 [2024-07-14 05:48:23.142090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.142115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.142289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.142317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.142496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.142524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.142754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.142813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.143029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.143055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.143271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.143299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.143533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.143566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.143784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.143822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.144052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.144078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.144239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.144264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 
00:34:16.480 [2024-07-14 05:48:23.144465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.480 [2024-07-14 05:48:23.144499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.480 qpair failed and we were unable to recover it. 00:34:16.480 [2024-07-14 05:48:23.144673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.144700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.144928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.144953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.145109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.145135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.145324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.145352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.145638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.145688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.145894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.145937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.146117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.146142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.146324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.146349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.146582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.146610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 
00:34:16.481 [2024-07-14 05:48:23.146821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.146847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.147068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.147093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.147282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.147310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.147511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.147538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.147736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.147769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.147994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.148019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.148191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.148219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.148469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.148497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.148697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.148725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.148941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.148967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 
00:34:16.481 [2024-07-14 05:48:23.149119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.149143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.149348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.149375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.149570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.149597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.149823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.149855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.150043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.150067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.150299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.150327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.150571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.150599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.150802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.150827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.151000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.151026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.151234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.151264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 
00:34:16.481 [2024-07-14 05:48:23.151458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.151483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.151710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.151737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.151965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.151991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.152201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.152239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.152439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.152467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.152671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.152695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.152901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.152929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.153105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.153132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.153359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.153384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.153617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.153645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 
00:34:16.481 [2024-07-14 05:48:23.153849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.153883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.154074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.154099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.481 [2024-07-14 05:48:23.154287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.481 [2024-07-14 05:48:23.154314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.481 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.154541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.154579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.154790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.154815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.155040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.155068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.155254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.155279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.155476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.155501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.155684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.155708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.155910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.155939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 
00:34:16.482 [2024-07-14 05:48:23.156146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.156171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.156382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.156411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.156646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.156675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.156922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.156948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.157156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.157185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.157399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.157424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.157611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.157637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.157863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.157916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.158156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.158181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.158338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.158363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 
00:34:16.482 [2024-07-14 05:48:23.158610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.158638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.158830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.158858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.159107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.159132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.159366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.159396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.159600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.159629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.159859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.159890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.160083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.160111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.160336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.160364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.160565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.160589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.160765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.160793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 
00:34:16.482 [2024-07-14 05:48:23.161039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.161068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.161257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.161283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.161489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.161517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.161716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.161744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.161943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.161968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.162150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.162178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.162420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.162445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.162663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.162688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.482 [2024-07-14 05:48:23.162920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.482 [2024-07-14 05:48:23.162949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.482 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.163123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.163150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 
00:34:16.483 [2024-07-14 05:48:23.163331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.163357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.163559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.163587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.163812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.163840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.164023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.164048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.164235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.164260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.164479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.164507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.164709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.164741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.164966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.164995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.165228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.165257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.165486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.165523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 
00:34:16.483 [2024-07-14 05:48:23.165723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.165748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.165924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.165957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.166162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.166197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.166378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.166405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.166576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.166603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.166806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.166831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.167038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.167068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.167277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.167309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.167493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.167518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 00:34:16.483 [2024-07-14 05:48:23.167713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.483 [2024-07-14 05:48:23.167741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.483 qpair failed and we were unable to recover it. 
00:34:16.483 [2024-07-14 05:48:23.167965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.483 [2024-07-14 05:48:23.167994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:16.483 qpair failed and we were unable to recover it.
00:34:16.483 [2024-07-14 05:48:23.168193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.483 [2024-07-14 05:48:23.168219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:16.483 qpair failed and we were unable to recover it.
[... the same three-line record repeats for every connection retry against tqpair=0x1405840 (addr=10.0.0.2, port=4420) from 05:48:23.168 through 05:48:23.213 ...]
00:34:16.488 [2024-07-14 05:48:23.213253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.488 [2024-07-14 05:48:23.213278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:16.488 qpair failed and we were unable to recover it.
00:34:16.488 [2024-07-14 05:48:23.213454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.488 [2024-07-14 05:48:23.213479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.488 qpair failed and we were unable to recover it. 00:34:16.488 [2024-07-14 05:48:23.213637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.488 [2024-07-14 05:48:23.213662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.488 qpair failed and we were unable to recover it. 00:34:16.488 [2024-07-14 05:48:23.213842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.488 [2024-07-14 05:48:23.213873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.488 qpair failed and we were unable to recover it. 00:34:16.488 [2024-07-14 05:48:23.214040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.488 [2024-07-14 05:48:23.214065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.488 qpair failed and we were unable to recover it. 00:34:16.488 [2024-07-14 05:48:23.214284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.488 [2024-07-14 05:48:23.214309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.488 qpair failed and we were unable to recover it. 00:34:16.488 [2024-07-14 05:48:23.214478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.488 [2024-07-14 05:48:23.214504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.488 qpair failed and we were unable to recover it. 00:34:16.488 [2024-07-14 05:48:23.214692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.488 [2024-07-14 05:48:23.214716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.488 qpair failed and we were unable to recover it. 00:34:16.488 [2024-07-14 05:48:23.214901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.488 [2024-07-14 05:48:23.214927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.488 qpair failed and we were unable to recover it. 00:34:16.488 [2024-07-14 05:48:23.215102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.488 [2024-07-14 05:48:23.215130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.488 qpair failed and we were unable to recover it. 00:34:16.488 [2024-07-14 05:48:23.215357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.488 [2024-07-14 05:48:23.215385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.488 qpair failed and we were unable to recover it. 
00:34:16.488 [2024-07-14 05:48:23.215558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.488 [2024-07-14 05:48:23.215583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.488 qpair failed and we were unable to recover it. 00:34:16.488 [2024-07-14 05:48:23.215780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.215805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.215959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.215984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.216160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.216184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.216400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.216425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.216628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.216655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.216826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.216851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.217067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.217092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.217271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.217296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.217512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.217537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 
00:34:16.489 [2024-07-14 05:48:23.218358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.218387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.218556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.218583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.218774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.218799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.218966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.218992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.219151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.219176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.219346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.219375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.219558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.219583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.219769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.219794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.219987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.220014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.220175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.220200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 
00:34:16.489 [2024-07-14 05:48:23.220363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.220388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.220597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.220623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.220806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.220831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.221006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.221031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.221192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.221216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.221992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.222021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.222235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.222266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.222504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.222530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.222710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.222735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.222929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.222955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 
00:34:16.489 [2024-07-14 05:48:23.223110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.223137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.223295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.223321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.223519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.223544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.223738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.223765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.223951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.223978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.224144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.224169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.224354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.224379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.224568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.224594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.224783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.224808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.225530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.225559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 
00:34:16.489 [2024-07-14 05:48:23.225790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.225816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.225984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.489 [2024-07-14 05:48:23.226009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.489 qpair failed and we were unable to recover it. 00:34:16.489 [2024-07-14 05:48:23.226171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.226202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.226378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.226403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.226599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.226624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.226804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.226829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.227000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.227026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.227188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.227213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.227420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.227444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.227626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.227650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 
00:34:16.490 [2024-07-14 05:48:23.227828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.227852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.228022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.228047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.228230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.228255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.228470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.228495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.228650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.228674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.228854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.228884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.229052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.229078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.229253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.229279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.229433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.229457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.229636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.229661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 
00:34:16.490 [2024-07-14 05:48:23.229820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.229845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.230009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.230034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.230191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.230216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.230372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.230397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.230586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.230611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.230799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.230824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.231022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.231048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.231199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.231223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.231407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.231434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.231616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.231641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 
00:34:16.490 [2024-07-14 05:48:23.231833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.231859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.232032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.232057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.232205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.232230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.232413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.232438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.232597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.232623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.232843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.232874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.233044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.233069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.233228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.233253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.233449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.233474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.233640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.233665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 
00:34:16.490 [2024-07-14 05:48:23.233847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.233883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.234069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.234095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.234278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.234303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.490 [2024-07-14 05:48:23.234483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.490 [2024-07-14 05:48:23.234508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.490 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.234738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.234766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.234978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.235004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.235192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.235217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.235451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.235477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.235636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.235661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.235852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.235886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 
00:34:16.491 [2024-07-14 05:48:23.236041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.236066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.236238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.236264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.236471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.236496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.236681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.236705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.236860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.236891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.237078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.237103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.237292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.237318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.237557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.237582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.237782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.237807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.238004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.238029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 
00:34:16.491 [2024-07-14 05:48:23.238213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.238239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.238448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.238472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.238663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.238688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.238873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.238899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.239088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.239113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.239299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.239324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.239480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.239505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.239680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.239717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.239894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.239920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 00:34:16.491 [2024-07-14 05:48:23.240077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.491 [2024-07-14 05:48:23.240103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.491 qpair failed and we were unable to recover it. 
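For readers triaging this log: errno 111 is ECONNREFUSED on Linux, so each posix_sock_create failure above means the TCP connection attempt to 10.0.0.2 on the NVMe/TCP port 4420 was actively refused (nothing was accepting connections there at that moment), and the qpair setup in nvme_tcp_qpair_connect_sock fails as a result. The sketch below is not SPDK's posix_sock_create; it is a plain POSIX illustration, reusing only the address and port visible in the log, of how a connect() to a port with no listener reports this errno.

```c
/*
 * Minimal illustration (not SPDK code): a plain POSIX connect() to an
 * address/port with no listener fails with errno 111 (ECONNREFUSED) on
 * Linux, the same errno reported in the log above. The address and port
 * are taken from the log entries.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```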
00:34:16.491 [2024-07-14 05:48:23.240289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.491 [2024-07-14 05:48:23.240319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:16.491 qpair failed and we were unable to recover it.
00:34:16.491 [2024-07-14 05:48:23.240435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1413390 is same with the state(5) to be set
00:34:16.491 [2024-07-14 05:48:23.240672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.491 [2024-07-14 05:48:23.240712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420
00:34:16.491 qpair failed and we were unable to recover it.
(the same three-line sequence, now for tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420, repeats for each retried connection attempt from 2024-07-14 05:48:23.240979 through 05:48:23.252643)
00:34:16.492 [2024-07-14 05:48:23.252798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.492 [2024-07-14 05:48:23.252824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420
00:34:16.492 qpair failed and we were unable to recover it.
00:34:16.492 [2024-07-14 05:48:23.253011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.492 [2024-07-14 05:48:23.253038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.492 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.253228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.253273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.253559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.253607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.253787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.253813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.253996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.254023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.254210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.254253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.254465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.254518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.254695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.254722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.254993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.255038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.255243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.255286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 
00:34:16.493 [2024-07-14 05:48:23.255505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.255551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.255728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.255757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.255929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.255959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.256148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.256192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.256404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.256447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.256658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.256684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.256880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.256908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.257117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.257161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.257381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.257425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.257646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.257692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 
00:34:16.493 [2024-07-14 05:48:23.257908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.257935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.258128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.258153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.258408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.258454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.258664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.258708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.258896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.258923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.259122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.259149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.259388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.259431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.259778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.259832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.260029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.260056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.260268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.260314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 
00:34:16.493 [2024-07-14 05:48:23.260523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.260567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.260794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.260822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.260997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.261024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.261231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.261283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.261480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.261525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.261718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.261746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.261931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.261963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.262193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.262247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.262466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.262511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.262689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.262715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 
00:34:16.493 [2024-07-14 05:48:23.262886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.262913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.493 [2024-07-14 05:48:23.263162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.493 [2024-07-14 05:48:23.263207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.493 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.263441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.263490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.263774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.263800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.264020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.264072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.264283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.264327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.264545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.264589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.264825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.264852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.265023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.265059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.265290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.265338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 
00:34:16.494 [2024-07-14 05:48:23.265582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.265628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.265842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.265893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.266060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.266087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.266305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.266350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.266682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.266731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.266958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.266992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.267235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.267279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.267529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.267574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.267769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.267798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.267993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.268022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 
00:34:16.494 [2024-07-14 05:48:23.268233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.268277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.268483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.268527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.268747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.268773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.268945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.268979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.269234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.269279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.269457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.269501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.269875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.269933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.270173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.270226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.270467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.270509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.270671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.270699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 
00:34:16.494 [2024-07-14 05:48:23.270905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.270936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.271151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.271178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.271413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.271457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.271700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.271745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.271990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.272017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.272196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.272241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.272448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.272491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.272723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.272767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.273004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.273032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.273220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.273266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 
00:34:16.494 [2024-07-14 05:48:23.273507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.273553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.273737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.273763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.494 [2024-07-14 05:48:23.273999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.494 [2024-07-14 05:48:23.274043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.494 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.274265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.274307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.274520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.274569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.274756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.274782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.275008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.275053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.275278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.275321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.275529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.275575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.275846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.275878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 
00:34:16.495 [2024-07-14 05:48:23.276088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.276115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.276363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.276406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.276642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.276696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.276886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.276912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.277068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.277094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.277303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.277347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.277591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.277635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.277810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.277838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.278014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.278041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.278237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.278281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 
00:34:16.495 [2024-07-14 05:48:23.278465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.278510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.278750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.278794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.278975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.279020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.279236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.279278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.279530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.279574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.279735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.279762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.280007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.280060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.280278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.280321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.280561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.280606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.280838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.280878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 
00:34:16.495 [2024-07-14 05:48:23.281075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.281105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.281355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.281402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.281613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.281656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.281871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.281899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.282082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.282108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.282339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.282384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.282573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.282620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.282831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.282858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.495 [2024-07-14 05:48:23.283090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.495 [2024-07-14 05:48:23.283121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.495 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.283304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.283351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 
00:34:16.496 [2024-07-14 05:48:23.283564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.283608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.283791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.283818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.284017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.284043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.284287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.284330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.284607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.284655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.284841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.284875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.285065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.285092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.285314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.285341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.285577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.285621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.285832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.285858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 
00:34:16.496 [2024-07-14 05:48:23.286050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.286077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.286286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.286329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.286539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.286584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.286792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.286818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.286981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.287007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.287215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.287258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.287494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.287536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.287725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.287750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.288034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.288076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 00:34:16.496 [2024-07-14 05:48:23.288295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.496 [2024-07-14 05:48:23.288322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.496 qpair failed and we were unable to recover it. 
00:34:16.501 [2024-07-14 05:48:23.337877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.337904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.338138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.338180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.338377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.338405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.338625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.338668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.338881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.338907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.339125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.339152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.339388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.339431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.339621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.339664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.339875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.339901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.340114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.340140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 
00:34:16.501 [2024-07-14 05:48:23.340349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.340392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.340651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.340694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.340917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.340945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.341174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.341202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.341402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.341431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.341661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.341690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.341889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.341930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.342116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.342141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.342325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.342355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 00:34:16.501 [2024-07-14 05:48:23.342583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.501 [2024-07-14 05:48:23.342611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.501 qpair failed and we were unable to recover it. 
00:34:16.501 [2024-07-14 05:48:23.342778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.342806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.342985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.343012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.343176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.343202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.343570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.343629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.343837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.343871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.344120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.344163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.344448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.344498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.344705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.344733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.344926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.344952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.345117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.345142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 
00:34:16.502 [2024-07-14 05:48:23.345346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.345374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.345551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.345578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.345770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.345810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.346041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.346067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.346278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.346305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.346513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.346541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.346715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.346743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.346954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.346980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.347136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.347165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.347373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.347401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 
00:34:16.502 [2024-07-14 05:48:23.347602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.347630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.347834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.347858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.348014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.348039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.348250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.348277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.348505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.348533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.348727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.348755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.348990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.349016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.349200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.349225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.349407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.349435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.349626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.349653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 
00:34:16.502 [2024-07-14 05:48:23.349839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.349863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.350058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.350083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.350315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.350343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.350634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.350684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.350876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.350904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.351066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.351091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.351296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.351321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.351481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.351507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.351693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.351717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.351894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.351920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 
00:34:16.502 [2024-07-14 05:48:23.352104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.352132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.502 qpair failed and we were unable to recover it. 00:34:16.502 [2024-07-14 05:48:23.352309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.502 [2024-07-14 05:48:23.352338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.352544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.352569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.352750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.352777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.353006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.353035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.353218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.353246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.353478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.353506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.353718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.353743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.353929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.353955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.354188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.354216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 
00:34:16.503 [2024-07-14 05:48:23.354396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.354424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.354626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.354651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.354878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.354907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.355108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.355135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.355314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.355339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.355535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.355563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.355769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.355797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.355966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.355992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.356188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.356216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.356450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.356476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 
00:34:16.503 [2024-07-14 05:48:23.356656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.356681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.356886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.356913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.357111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.357139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.357367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.357392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.357593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.357621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.357827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.357854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.358039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.358064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.358306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.358334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.358519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.358543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.358735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.358761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 
00:34:16.503 [2024-07-14 05:48:23.358961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.358989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.359189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.359217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.359395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.359424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.359624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.359651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.359828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.359856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.360075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.360102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.360315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.360343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.360528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.360553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.360707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.360731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.360932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.360960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 
00:34:16.503 [2024-07-14 05:48:23.361158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.361185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.361385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.361411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.503 [2024-07-14 05:48:23.361621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.503 [2024-07-14 05:48:23.361648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.503 qpair failed and we were unable to recover it. 00:34:16.504 [2024-07-14 05:48:23.361888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.504 [2024-07-14 05:48:23.361913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.504 qpair failed and we were unable to recover it. 00:34:16.504 [2024-07-14 05:48:23.362061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.504 [2024-07-14 05:48:23.362086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.504 qpair failed and we were unable to recover it. 00:34:16.504 [2024-07-14 05:48:23.362306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.504 [2024-07-14 05:48:23.362334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.504 qpair failed and we were unable to recover it. 00:34:16.504 [2024-07-14 05:48:23.362514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.504 [2024-07-14 05:48:23.362541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.504 qpair failed and we were unable to recover it. 00:34:16.504 [2024-07-14 05:48:23.362770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.504 [2024-07-14 05:48:23.362796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.504 qpair failed and we were unable to recover it. 00:34:16.504 [2024-07-14 05:48:23.362980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.504 [2024-07-14 05:48:23.363008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.504 qpair failed and we were unable to recover it. 00:34:16.504 [2024-07-14 05:48:23.363215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.504 [2024-07-14 05:48:23.363240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.504 qpair failed and we were unable to recover it. 
00:34:16.504 [2024-07-14 05:48:23.363420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.504 [2024-07-14 05:48:23.363445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.504 qpair failed and we were unable to recover it. 00:34:16.504 [2024-07-14 05:48:23.363617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.504 [2024-07-14 05:48:23.363641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.504 qpair failed and we were unable to recover it. 00:34:16.504 [2024-07-14 05:48:23.363854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.504 [2024-07-14 05:48:23.363887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.364072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.364097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.364328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.364356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.364534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.364562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.364729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.364754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.364914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.364938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.365144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.365172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.365357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.365382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 
00:34:16.505 [2024-07-14 05:48:23.365573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.365598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.365834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.365861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.366061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.366086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.366279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.366307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.366506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.366534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.366714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.366739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.366963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.366991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.367193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.367220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.367433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.367458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.367669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.367709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 
00:34:16.505 [2024-07-14 05:48:23.367878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.367906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.368082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.368107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.368293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.368318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.368555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.368583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.368784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.368810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.369021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.369050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.369247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.369274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.369438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.369462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.369668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.369693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.369878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.369907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 
00:34:16.505 [2024-07-14 05:48:23.370091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.370116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.370359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.370387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.370560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.370588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.370795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.370819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.371032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.371060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.371289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.371316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.371521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.371546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.371753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.371780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.371980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.372009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.372186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.372212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 
00:34:16.505 [2024-07-14 05:48:23.372387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.372415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.372615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.372642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.372847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.372879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.505 [2024-07-14 05:48:23.373066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.505 [2024-07-14 05:48:23.373093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.505 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.373291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.373319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.373498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.373523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.373784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.373837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.374056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.374081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.374240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.374266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.374471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.374496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 
00:34:16.506 [2024-07-14 05:48:23.374715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.374748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.374948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.374974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.375153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.375180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.375353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.375380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.375622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.375647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.375863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.375898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.376126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.376154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.376390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.376415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.376596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.376621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.376785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.376810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 
00:34:16.506 [2024-07-14 05:48:23.376996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.377021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.377221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.377249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.377453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.377478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.377686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.377711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.377932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.377960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.378195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.378223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.378406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.378431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.378626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.378653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.378842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.378871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.379052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.379077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 
00:34:16.506 [2024-07-14 05:48:23.379276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.379304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.379472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.379500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.379683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.379709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.379894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.379928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.380170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.380197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.380404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.380429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.380611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.380639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.380834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.380872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.381096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.381123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.381299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.381327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 
00:34:16.506 [2024-07-14 05:48:23.381537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.381562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.381744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.381769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.381954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.381981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.382158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.382185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.382352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.506 [2024-07-14 05:48:23.382377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.506 qpair failed and we were unable to recover it. 00:34:16.506 [2024-07-14 05:48:23.382544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.382569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.382722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.382746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.382949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.382974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.383141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.383170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.383331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.383358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 
00:34:16.507 [2024-07-14 05:48:23.383565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.383590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.383756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.383782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.383985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.384013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.384239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.384264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.384467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.384495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.384722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.384750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.384950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.384975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.385137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.385161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.385342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.385367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.385590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.385615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 
00:34:16.507 [2024-07-14 05:48:23.385811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.385839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.386042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.386067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.386259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.386284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.386460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.386487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.386685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.386713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.386918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.386943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.387147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.387175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.387402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.387430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.387670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.387696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.387935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.387961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 
00:34:16.507 [2024-07-14 05:48:23.388200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.388228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.388429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.388454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.388676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.388703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.388898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.388927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.389136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.389161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.389358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.389386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.389600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.389628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.389802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.389826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.390020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.390046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.390232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.390257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 
00:34:16.507 [2024-07-14 05:48:23.390436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.390461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.390661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.390690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.390889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.390917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.391125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.391149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.391310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.391334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.391532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.507 [2024-07-14 05:48:23.391560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.507 qpair failed and we were unable to recover it. 00:34:16.507 [2024-07-14 05:48:23.391764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.391789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.391966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.391995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.392185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.392210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.392414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.392439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-07-14 05:48:23.392622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.392650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.392881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.392909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.393113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.393137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.393351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.393378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.393577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.393605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.393795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.393820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.394006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.394035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.394237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.394265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.394492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.394518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.394727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.394755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-07-14 05:48:23.394975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.395000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.395183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.395208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.395415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.395443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.395654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.395681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.395887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.395913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.396117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.396160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.396364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.396391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.396621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.396646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.396850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.396882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.397071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.397096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-07-14 05:48:23.397300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.397325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.397519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.397547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.397748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.397776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.398006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.398031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.398244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.398272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.398435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.398463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.398637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.398662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.398860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.398892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.399103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.399131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.399337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.399361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-07-14 05:48:23.399524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.399552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.399781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.399808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.399994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.400019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.400182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.400207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.400388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.400413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.400568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.400593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.400765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.508 [2024-07-14 05:48:23.400793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-07-14 05:48:23.400991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.401019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.401198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.401223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.401432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.401460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 
00:34:16.509 [2024-07-14 05:48:23.401644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.401669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.401875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.401900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.402107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.402138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.402319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.402346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.402574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.402600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.402814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.402842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.403054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.403079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.403286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.403311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.403501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.403528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.403781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.403832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 
00:34:16.509 [2024-07-14 05:48:23.404040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.404066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.404290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.404318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.404547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.404572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.404756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.404781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.405008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.405038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.405239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.405267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.405497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.405521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.405706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.405731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.405959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.405986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.406190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.406216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 
00:34:16.509 [2024-07-14 05:48:23.406393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.406422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.406608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.406633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.406838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.406863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.407114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.407142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.407336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.407364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.407591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.407616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.407791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.407819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.408067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.408092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.408272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.408297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-07-14 05:48:23.408482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.509 [2024-07-14 05:48:23.408513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.509 qpair failed and we were unable to recover it. 
00:34:16.514 [2024-07-14 05:48:23.450789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-07-14 05:48:23.450813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 00:34:16.514 [2024-07-14 05:48:23.450993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-07-14 05:48:23.451020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 00:34:16.514 [2024-07-14 05:48:23.451199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-07-14 05:48:23.451226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 00:34:16.514 [2024-07-14 05:48:23.451429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-07-14 05:48:23.451454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 00:34:16.514 [2024-07-14 05:48:23.451660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-07-14 05:48:23.451685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 00:34:16.514 [2024-07-14 05:48:23.451873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-07-14 05:48:23.451898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 00:34:16.514 [2024-07-14 05:48:23.452084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-07-14 05:48:23.452109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 00:34:16.514 [2024-07-14 05:48:23.452296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-07-14 05:48:23.452321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 00:34:16.514 [2024-07-14 05:48:23.452505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-07-14 05:48:23.452551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 00:34:16.514 [2024-07-14 05:48:23.452753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-07-14 05:48:23.452779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 
00:34:16.514 [2024-07-14 05:48:23.452960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-07-14 05:48:23.452986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 00:34:16.514 [2024-07-14 05:48:23.453168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-07-14 05:48:23.453193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 00:34:16.514 [2024-07-14 05:48:23.453358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-07-14 05:48:23.453383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 00:34:16.514 [2024-07-14 05:48:23.453569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-07-14 05:48:23.453594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.514 qpair failed and we were unable to recover it. 00:34:16.514 [2024-07-14 05:48:23.453771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-07-14 05:48:23.453796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.453980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.454005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.454185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.454210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.454400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.454425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.454607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.454632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.454838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.454863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 
00:34:16.515 [2024-07-14 05:48:23.455101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.455126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.455351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.455376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.455533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.455558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.455748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.455773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.455977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.456002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.456174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.456199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.456411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.456440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.456603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.456628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.456914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.456942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.457106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.457131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 
00:34:16.515 [2024-07-14 05:48:23.457324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.457349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.457523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.457549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.457707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.457732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.457949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.457974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.458171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.458195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.458405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.458431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.458610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.458636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.458811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.458836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.459048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.459074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.459296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.459322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 
00:34:16.515 [2024-07-14 05:48:23.459515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.459540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.459723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.459748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.459950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.459976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.460141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.460168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.460356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.460381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.460541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.460566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.460783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.460811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.461028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.461054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.461235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.461260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.461468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.461494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 
00:34:16.515 [2024-07-14 05:48:23.461705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.461730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.461913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.461939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.462090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.462115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.462278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.462303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.462515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.462540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.462700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 05:48:23.462725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.515 qpair failed and we were unable to recover it. 00:34:16.515 [2024-07-14 05:48:23.462889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.462916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.463105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.463131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.463301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.463326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.463507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.463532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 
00:34:16.516 [2024-07-14 05:48:23.463678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.463703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.463913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.463939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.464114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.464139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.464295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.464320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.464526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.464551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.464714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.464739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.464919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.464945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.465155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.465180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.465350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.465376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.465557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.465582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 
00:34:16.516 [2024-07-14 05:48:23.465756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.465781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.465946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.465971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.466133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.466158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.466325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.466351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.466539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.466564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.466741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.466766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.466917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.466943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.467104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.467129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.467307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.467332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.467490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.467516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 
00:34:16.516 [2024-07-14 05:48:23.467671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.467696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.467859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.467889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.468076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.468101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.468320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.468345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.468529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.468554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.468740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.468766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.468928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.468954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.469132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.469156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.469324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.469350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.469564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.469589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 
00:34:16.516 [2024-07-14 05:48:23.469762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.469787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.469947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.469971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.470177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.470205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.470405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.470430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.470608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.470636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.470824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.470849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.471072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.471097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.516 qpair failed and we were unable to recover it. 00:34:16.516 [2024-07-14 05:48:23.471259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 05:48:23.471285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.471462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.471487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.471667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.471692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 
00:34:16.517 [2024-07-14 05:48:23.471871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.471897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.472077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.472102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.472309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.472333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.472505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.472529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.472704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.472729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.472888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.472914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.473080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.473104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.473317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.473342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.473531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.473556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.473750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.473776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 
00:34:16.517 [2024-07-14 05:48:23.473956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.473984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.474162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.474187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.474352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.474376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.474530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.474554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.474743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.474768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.474934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.474959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.475142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.475167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.475377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.475401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.475558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.475583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.475796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.475821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 
00:34:16.517 [2024-07-14 05:48:23.476004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.476029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.476231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.476263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.476428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.476455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.476626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.476650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.476829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.476854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.477025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.477050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.477260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.477285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.477464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.477488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.477653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.477677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.477830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.477855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 
00:34:16.517 [2024-07-14 05:48:23.478049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.478074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.478319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.478347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.478546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.478573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.517 [2024-07-14 05:48:23.478756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.517 [2024-07-14 05:48:23.478781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.517 qpair failed and we were unable to recover it. 00:34:16.518 [2024-07-14 05:48:23.478972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-07-14 05:48:23.478997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-07-14 05:48:23.479191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-07-14 05:48:23.479215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-07-14 05:48:23.479397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-07-14 05:48:23.479422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-07-14 05:48:23.479587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-07-14 05:48:23.479613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-07-14 05:48:23.479802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-07-14 05:48:23.479827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 00:34:16.518 [2024-07-14 05:48:23.480028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.518 [2024-07-14 05:48:23.480053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.518 qpair failed and we were unable to recover it. 
00:34:16.518 [2024-07-14 05:48:23.480261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.518 [2024-07-14 05:48:23.480285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:16.518 qpair failed and we were unable to recover it.
[The same three messages (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock connection error for tqpair=0x1405840 at 10.0.0.2 port 4420, and "qpair failed and we were unable to recover it.") repeat continuously until 05:48:23.525:]
00:34:16.523 [2024-07-14 05:48:23.525786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.523 [2024-07-14 05:48:23.525811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:16.523 qpair failed and we were unable to recover it.
00:34:16.523 [2024-07-14 05:48:23.526012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.526036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.526222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.526246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.526441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.526466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.526705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.526733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.526939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.526965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.527177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.527206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.527443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.527469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.527654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.527679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.527838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.527862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.528021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.528045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 
00:34:16.523 [2024-07-14 05:48:23.528243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.528269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.528491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.528516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.528698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.528726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.528900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.528925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.529111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.529135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.529348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.529376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.529611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.529636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.529831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.529862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.530041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.530069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.530255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.530280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 
00:34:16.523 [2024-07-14 05:48:23.530497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.530525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.530712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.530740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.530952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.530976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.531203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.531238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.531480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.531506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.531682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.531707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.523 [2024-07-14 05:48:23.531909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.523 [2024-07-14 05:48:23.531938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.523 qpair failed and we were unable to recover it. 00:34:16.524 [2024-07-14 05:48:23.533953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-07-14 05:48:23.534004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-07-14 05:48:23.534252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-07-14 05:48:23.534281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-07-14 05:48:23.534540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-07-14 05:48:23.534572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 
00:34:16.524 [2024-07-14 05:48:23.534831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-07-14 05:48:23.534859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-07-14 05:48:23.535103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-07-14 05:48:23.535132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-07-14 05:48:23.535315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-07-14 05:48:23.535349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-07-14 05:48:23.535586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-07-14 05:48:23.535618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-07-14 05:48:23.535796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-07-14 05:48:23.535821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-07-14 05:48:23.536044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-07-14 05:48:23.536090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-07-14 05:48:23.536331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-07-14 05:48:23.536362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-07-14 05:48:23.536564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-07-14 05:48:23.536598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-07-14 05:48:23.536809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-07-14 05:48:23.536847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-07-14 05:48:23.537098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-07-14 05:48:23.537124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 
00:34:16.524 [2024-07-14 05:48:23.537323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-07-14 05:48:23.537348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-07-14 05:48:23.537561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-07-14 05:48:23.537604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-07-14 05:48:23.537847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.524 [2024-07-14 05:48:23.537883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.524 qpair failed and we were unable to recover it. 00:34:16.524 [2024-07-14 05:48:23.538054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.538080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 00:34:16.803 [2024-07-14 05:48:23.538273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.538299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 00:34:16.803 [2024-07-14 05:48:23.538451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.538476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 00:34:16.803 [2024-07-14 05:48:23.538657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.538683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 00:34:16.803 [2024-07-14 05:48:23.538826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.538851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 00:34:16.803 [2024-07-14 05:48:23.539095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.539138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 00:34:16.803 [2024-07-14 05:48:23.539343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.539370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 
00:34:16.803 [2024-07-14 05:48:23.539581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.539610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 00:34:16.803 [2024-07-14 05:48:23.539816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.539845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 00:34:16.803 [2024-07-14 05:48:23.540044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.540070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 00:34:16.803 [2024-07-14 05:48:23.540290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.540318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 00:34:16.803 [2024-07-14 05:48:23.540531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.540560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 00:34:16.803 [2024-07-14 05:48:23.540744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.540770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 00:34:16.803 [2024-07-14 05:48:23.540993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.541019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 00:34:16.803 [2024-07-14 05:48:23.541196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.541224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 00:34:16.803 [2024-07-14 05:48:23.541426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.541452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 00:34:16.803 [2024-07-14 05:48:23.541625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.541654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 
00:34:16.803 [2024-07-14 05:48:23.541829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.541858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 00:34:16.803 [2024-07-14 05:48:23.542069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.542095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.803 qpair failed and we were unable to recover it. 00:34:16.803 [2024-07-14 05:48:23.542288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.803 [2024-07-14 05:48:23.542316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.542490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.542519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.542727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.542753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.542976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.543002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.543230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.543259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.543455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.543481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.543664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.543692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.543936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.543967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 
00:34:16.804 [2024-07-14 05:48:23.544149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.544174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.544337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.544363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.544543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.544573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.544762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.544787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.544975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.545001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.545202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.545231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.545407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.545432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.545666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.545695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.545908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.545935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.546131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.546157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 
00:34:16.804 [2024-07-14 05:48:23.546363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.546391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.546634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.546663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.546870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.546897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.547063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.547090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.547299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.547327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.547527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.547553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.547729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.547757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.547970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.547996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.548205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.548231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.548427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.548456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 
00:34:16.804 [2024-07-14 05:48:23.548703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.548754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.548973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.548999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.549224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.549253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.549459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.549488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.549722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.549748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.549984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.550011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.550239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.550268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.550471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.550497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.550724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.550752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.550974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.551000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 
00:34:16.804 [2024-07-14 05:48:23.551196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.551221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.551420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.551449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.551768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.804 [2024-07-14 05:48:23.551817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.804 qpair failed and we were unable to recover it. 00:34:16.804 [2024-07-14 05:48:23.552035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.552061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.552254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.552280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.552517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.552546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.552795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.552820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.553037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.553062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.553275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.553304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.553515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.553544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 
00:34:16.805 [2024-07-14 05:48:23.553727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.553755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.553999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.554025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.554210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.554235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.554413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.554442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.554784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.554833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.555032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.555058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.555268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.555309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.555522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.555548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.555700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.555725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.555993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.556022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 
00:34:16.805 [2024-07-14 05:48:23.556250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.556276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.556457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.556483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.556659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.556688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.556893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.556923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.557130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.557156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.557353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.557378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.557563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.557589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.557748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.557774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.557979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.558009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 00:34:16.805 [2024-07-14 05:48:23.558235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.805 [2024-07-14 05:48:23.558263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.805 qpair failed and we were unable to recover it. 
00:34:16.805 [2024-07-14 05:48:23.558437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.805 [2024-07-14 05:48:23.558464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420
00:34:16.805 qpair failed and we were unable to recover it.
00:34:16.805 (the same three-line sequence, connect() failed with errno = 111, sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it, repeats continuously with only the timestamps changing, through [2024-07-14 05:48:23.606771] at relay time 00:34:16.811)
00:34:16.811 [2024-07-14 05:48:23.606973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.606999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.607237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.607268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.607471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.607500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.607696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.607722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.607909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.607936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.608141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.608169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.608343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.608368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.608549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.608578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.608780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.608808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.608978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.609004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 
00:34:16.811 [2024-07-14 05:48:23.609211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.609239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.609411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.609439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.609623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.609648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.609880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.609909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.610136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.610164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.610348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.610374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.610549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.610575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.610782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.610811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.610984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.611010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.611189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.611217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 
00:34:16.811 [2024-07-14 05:48:23.611446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.611471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.611629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.611654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.611836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.611874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.612102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.612128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.612315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.612341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.612576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.612604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.612806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.612834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.613044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.613070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.613227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.613252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.613408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.613433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 
00:34:16.811 [2024-07-14 05:48:23.613640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.613665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.613904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.613946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.614107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.614132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.614340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.614365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.614556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.614584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.614787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.614815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.615027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.615057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.615287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.615316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.615529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.615554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 00:34:16.811 [2024-07-14 05:48:23.615738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.811 [2024-07-14 05:48:23.615763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.811 qpair failed and we were unable to recover it. 
00:34:16.812 [2024-07-14 05:48:23.615994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.616023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.616256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.616285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.616514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.616539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.616754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.616782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.616992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.617021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.617225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.617250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.617457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.617485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.617686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.617712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.617922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.617948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.618153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.618179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 
00:34:16.812 [2024-07-14 05:48:23.618416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.618444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.618619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.618645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.618853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.618901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.619065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.619094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.619272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.619298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.619511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.619540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.619743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.619772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.619965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.619991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.620165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.620190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.620394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.620422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 
00:34:16.812 [2024-07-14 05:48:23.620621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.620646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.620813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.620839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.621029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.621055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.621272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.621298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.621479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.621507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.621684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.621712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.621974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.622000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.622229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.622258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.622466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.622493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.622679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.622706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 
00:34:16.812 [2024-07-14 05:48:23.622931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.622960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.623140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.623166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.623378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.623404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.623568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.623593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.623814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.812 [2024-07-14 05:48:23.623843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.812 qpair failed and we were unable to recover it. 00:34:16.812 [2024-07-14 05:48:23.624025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.624051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.624236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.624265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.624452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.624477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.624665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.624691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.624841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.624870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 
00:34:16.813 [2024-07-14 05:48:23.625079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.625108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.625307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.625333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.625493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.625519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.625719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.625747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.625982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.626009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.626222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.626251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.626445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.626473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.626670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.626696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.626899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.626929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.627155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.627183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 
00:34:16.813 [2024-07-14 05:48:23.627363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.627389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.627621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.627649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.627816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.627845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.628082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.628108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.628329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.628356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.628594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.628622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.628824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.628854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.629071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.629097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.629342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.629368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.629556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.629583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 
00:34:16.813 [2024-07-14 05:48:23.629740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.629766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.629946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.629972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.630156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.630182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.630370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.630399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.630624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.630650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.630860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.630891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.631090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.631121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.631350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.631380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.631567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.631593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.631797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.631826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 
00:34:16.813 [2024-07-14 05:48:23.632070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.632107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.632308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.632334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.632540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.632569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.632768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.632797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.633008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.633034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.813 [2024-07-14 05:48:23.633245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.813 [2024-07-14 05:48:23.633274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.813 qpair failed and we were unable to recover it. 00:34:16.814 [2024-07-14 05:48:23.633476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.814 [2024-07-14 05:48:23.633509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.814 qpair failed and we were unable to recover it. 00:34:16.814 [2024-07-14 05:48:23.633743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.814 [2024-07-14 05:48:23.633769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.814 qpair failed and we were unable to recover it. 00:34:16.814 [2024-07-14 05:48:23.633983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.814 [2024-07-14 05:48:23.634012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.814 qpair failed and we were unable to recover it. 00:34:16.814 [2024-07-14 05:48:23.634187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.814 [2024-07-14 05:48:23.634216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.814 qpair failed and we were unable to recover it. 
00:34:16.814 [2024-07-14 05:48:23.634423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.814 [2024-07-14 05:48:23.634449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.814 qpair failed and we were unable to recover it. 00:34:16.814 [2024-07-14 05:48:23.634613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.814 [2024-07-14 05:48:23.634639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.814 qpair failed and we were unable to recover it. 00:34:16.814 [2024-07-14 05:48:23.634845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.814 [2024-07-14 05:48:23.634889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.814 qpair failed and we were unable to recover it. 00:34:16.814 [2024-07-14 05:48:23.635116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.814 [2024-07-14 05:48:23.635142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.814 qpair failed and we were unable to recover it. 00:34:16.814 [2024-07-14 05:48:23.635358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.814 [2024-07-14 05:48:23.635387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.814 qpair failed and we were unable to recover it. 00:34:16.814 [2024-07-14 05:48:23.635590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.814 [2024-07-14 05:48:23.635618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.814 qpair failed and we were unable to recover it. 00:34:16.814 [2024-07-14 05:48:23.635826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.814 [2024-07-14 05:48:23.635852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.814 qpair failed and we were unable to recover it. 00:34:16.814 [2024-07-14 05:48:23.636066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.814 [2024-07-14 05:48:23.636108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.814 qpair failed and we were unable to recover it. 00:34:16.814 [2024-07-14 05:48:23.636341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.814 [2024-07-14 05:48:23.636369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.814 qpair failed and we were unable to recover it. 00:34:16.814 [2024-07-14 05:48:23.636550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.814 [2024-07-14 05:48:23.636575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.814 qpair failed and we were unable to recover it. 
00:34:16.814 [2024-07-14 05:48:23.636730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.814 [2024-07-14 05:48:23.636755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420
00:34:16.814 qpair failed and we were unable to recover it.
00:34:16.819 [... the same three-line error repeats verbatim, differing only in its timestamps, for every reconnection attempt from 2024-07-14 05:48:23.636 through 05:48:23.684 ...]
00:34:16.819 [2024-07-14 05:48:23.684782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.819 [2024-07-14 05:48:23.684810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.819 qpair failed and we were unable to recover it. 00:34:16.819 [2024-07-14 05:48:23.685017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.819 [2024-07-14 05:48:23.685043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.819 qpair failed and we were unable to recover it. 00:34:16.819 [2024-07-14 05:48:23.685199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.819 [2024-07-14 05:48:23.685224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.819 qpair failed and we were unable to recover it. 00:34:16.819 [2024-07-14 05:48:23.685410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.819 [2024-07-14 05:48:23.685436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.819 qpair failed and we were unable to recover it. 00:34:16.819 [2024-07-14 05:48:23.685679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.819 [2024-07-14 05:48:23.685708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.819 qpair failed and we were unable to recover it. 00:34:16.819 [2024-07-14 05:48:23.685884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.819 [2024-07-14 05:48:23.685910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.819 qpair failed and we were unable to recover it. 00:34:16.819 [2024-07-14 05:48:23.686070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.819 [2024-07-14 05:48:23.686095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.819 qpair failed and we were unable to recover it. 00:34:16.819 [2024-07-14 05:48:23.686282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.819 [2024-07-14 05:48:23.686307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.819 qpair failed and we were unable to recover it. 00:34:16.819 [2024-07-14 05:48:23.686491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.819 [2024-07-14 05:48:23.686517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.819 qpair failed and we were unable to recover it. 00:34:16.819 [2024-07-14 05:48:23.686752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.819 [2024-07-14 05:48:23.686780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.819 qpair failed and we were unable to recover it. 
00:34:16.819 [2024-07-14 05:48:23.686981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.819 [2024-07-14 05:48:23.687011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.819 qpair failed and we were unable to recover it. 00:34:16.819 [2024-07-14 05:48:23.687244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.819 [2024-07-14 05:48:23.687270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.819 qpair failed and we were unable to recover it. 00:34:16.819 [2024-07-14 05:48:23.687469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.819 [2024-07-14 05:48:23.687497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.819 qpair failed and we were unable to recover it. 00:34:16.819 [2024-07-14 05:48:23.687724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.687752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.687956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.687982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.688205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.688233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.688444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.688470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.688659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.688685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.688920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.688949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.689187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.689215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 
00:34:16.820 [2024-07-14 05:48:23.689421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.689448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.689654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.689682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.689909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.689939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.690115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.690141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.690305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.690330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.690558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.690583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.690735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.690761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.690955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.690981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.691170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.691196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.691345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.691370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 
00:34:16.820 [2024-07-14 05:48:23.691591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.691617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.691772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.691797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.691964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.691999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.692210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.692246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.692404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.692429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.692644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.692670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.692878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.692904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.693065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.693091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.693301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.693326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.693511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.693537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 
00:34:16.820 [2024-07-14 05:48:23.693722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.693752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.693987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.694013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.694237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.694262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.694445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.694474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.694634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.694659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.694832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.694862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.695108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.695137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.695335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.695360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.820 qpair failed and we were unable to recover it. 00:34:16.820 [2024-07-14 05:48:23.695557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.820 [2024-07-14 05:48:23.695582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.695763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.695789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 
00:34:16.821 [2024-07-14 05:48:23.695971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.695997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.696181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.696206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.696364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.696389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.696567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.696592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.696751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.696776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.696958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.696984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.697168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.697193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.697402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.697431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.697660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.697689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.697879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.697904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 
00:34:16.821 [2024-07-14 05:48:23.698065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.698091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.698304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.698330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.698514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.698539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.698720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.698746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.698905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.698932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.699120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.699146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.699324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.699349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.699540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.699565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.699847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.699882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.700100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.700126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 
00:34:16.821 [2024-07-14 05:48:23.700329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.700355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.700514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.700539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.700740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.700769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.700949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.700978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.701205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.701231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.701426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.701453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.701612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.701637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.701827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.701852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.702065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.702093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.702292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.702320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 
00:34:16.821 [2024-07-14 05:48:23.702525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.702550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.702742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.702768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.702927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.702964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.703151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.703180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.703368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.703393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.703577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.703603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.703784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.703810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.703968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.703995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.704192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.821 [2024-07-14 05:48:23.704220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.821 qpair failed and we were unable to recover it. 00:34:16.821 [2024-07-14 05:48:23.704426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.704451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 
00:34:16.822 [2024-07-14 05:48:23.704632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.704658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.704886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.704916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.705124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.705151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.705371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.705400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.705566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.705594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.705788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.705813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.706047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.706076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.706301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.706328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.706480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.706504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.706708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.706736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 
00:34:16.822 [2024-07-14 05:48:23.706936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.706965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.707147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.707172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.707336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.707361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.707574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.707600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.707815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.707843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.708088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.708114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.708305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.708347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.708550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.708575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.708745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.708771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.708988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.709014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 
00:34:16.822 [2024-07-14 05:48:23.709173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.709199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.709408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.709434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.709598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.709623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.709831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.709857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.710036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.710064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.710275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.710303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.710525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.710550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.710747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.710775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.710952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.710981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.711208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.711234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 
00:34:16.822 [2024-07-14 05:48:23.711388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.711413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.711591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.711616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.711800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.711826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.712014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.712044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.712253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.712279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.712429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.712454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.712658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.712683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.712839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.712875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.713039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.713065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 00:34:16.822 [2024-07-14 05:48:23.713246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.822 [2024-07-14 05:48:23.713272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.822 qpair failed and we were unable to recover it. 
00:34:16.822 [2024-07-14 05:48:23.713451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.713476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.713656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.713681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.713876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.713902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.714062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.714088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.714295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.714321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.714504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.714529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.714693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.714720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.714932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.714958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.715167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.715192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.715386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.715412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 
00:34:16.823 [2024-07-14 05:48:23.715623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.715648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.715832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.715857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.716031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.716057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.716208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.716234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.716406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.716431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.716600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.716641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.716903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.716946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.717124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.717149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.717328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.717353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.717561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.717586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 
00:34:16.823 [2024-07-14 05:48:23.717779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.717805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.717960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.717987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.718141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.718167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.718339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.718365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.718549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.718574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.718790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.718815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.718970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.718996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.719207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.719233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.719379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.719404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.719593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.719618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 
00:34:16.823 [2024-07-14 05:48:23.719773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.719799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.719996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.720022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.720184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.720209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.720372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.720402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.720612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.720637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.720824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.720849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.721057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.721085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.721290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.721315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.721477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.721502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.721688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.721713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 
00:34:16.823 [2024-07-14 05:48:23.721905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.823 [2024-07-14 05:48:23.721931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.823 qpair failed and we were unable to recover it. 00:34:16.823 [2024-07-14 05:48:23.722138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.722164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.722344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.722370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.722520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.722546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.722726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.722751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.722928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.722954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.723129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.723154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.723367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.723396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.723616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.723644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.723821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.723847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 
00:34:16.824 [2024-07-14 05:48:23.724035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.724060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.724242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.724268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.724477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.724503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.724689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.724715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.724921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.724947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.725126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.725152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.725334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.725360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.725545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.725570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.725758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.725784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.725964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.725990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 
00:34:16.824 [2024-07-14 05:48:23.726202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.726227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.726404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.726429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.726611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.726637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.726824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.726849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.727039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.727065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.727253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.727278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.727463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.727488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.727673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.727698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.727856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.727887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.728065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.728091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 
00:34:16.824 [2024-07-14 05:48:23.728281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.728307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.728485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.728511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.728721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.728747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.728932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.728962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.729142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.729168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.729378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.729404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.824 [2024-07-14 05:48:23.729585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.824 [2024-07-14 05:48:23.729610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.824 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.729796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.729821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.730007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.730033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.730216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.730242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 
00:34:16.825 [2024-07-14 05:48:23.730398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.730424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.730586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.730612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.730840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.730870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.731034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.731060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.731265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.731291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.731476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.731501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.731658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.731684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.731938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.731964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.732120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.732146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.732326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.732352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 
00:34:16.825 [2024-07-14 05:48:23.732518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.732543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.732761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.732789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.733024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.733051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.733241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.733266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.733450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.733475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.733667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.733693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.733914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.733940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.734093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.734121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.734296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.734322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.734479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.734505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 
00:34:16.825 [2024-07-14 05:48:23.734722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.734748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.734940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.734966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.735150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.735175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.735345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.735370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.735552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.735578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.735755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.735780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.735954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.735980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.736189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.736215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.736397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.736424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.736612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.736637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 
00:34:16.825 [2024-07-14 05:48:23.736800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.736826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.737029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.737056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.737219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.737245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.737441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.737475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.737662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.737687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.737873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.737899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.825 [2024-07-14 05:48:23.738081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.825 [2024-07-14 05:48:23.738108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.825 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.738281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.738307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.738491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.738518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.738724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.738750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 
00:34:16.826 [2024-07-14 05:48:23.738907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.738933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.739113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.739139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.739294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.739319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.739512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.739537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.739721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.739750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.739953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.739979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.740138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.740164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.740351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.740377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.740561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.740587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.740739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.740764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 
00:34:16.826 [2024-07-14 05:48:23.740949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.740976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.741137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.741163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.741356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.741382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.741565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.741591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.741749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.741774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.741936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.741963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.742127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.742152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.742363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.742388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.742552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.742579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.742764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.742791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 
00:34:16.826 [2024-07-14 05:48:23.742981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.743008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.743193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.743219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.743372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.743399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.743609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.743635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.743791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.743817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.743969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.743994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.744172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.744198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.744357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.744383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.744598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.744624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.744782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.744807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 
00:34:16.826 [2024-07-14 05:48:23.745000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.745027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.745209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.745235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.745451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.745476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.745664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.745694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.745880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.745913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.746131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.746161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.746362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.746391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.826 [2024-07-14 05:48:23.746584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.826 [2024-07-14 05:48:23.746611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.826 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.746816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.746845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.747069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.747095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 
00:34:16.827 [2024-07-14 05:48:23.747254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.747280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.747484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.747510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.747746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.747775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.747986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.748012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.748194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.748219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.748379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.748404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.748590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.748616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.748808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.748834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.749011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.749037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.749221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.749247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 
00:34:16.827 [2024-07-14 05:48:23.749458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.749484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.749694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.749722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.749912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.749940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.750103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.750129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.750309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.750337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.750565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.750590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.750769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.750795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.751008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.751034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.751212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.751238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.751397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.751422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 
00:34:16.827 [2024-07-14 05:48:23.751628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.751657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.751835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.751860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.752049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.752076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.752272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.752298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.752476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.752502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.752726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.752754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.752986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.753015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.753220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.753246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.753433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.753474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.753642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.753670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 
00:34:16.827 [2024-07-14 05:48:23.753875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.753910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.754093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.754118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.754306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.754333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.754517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.754547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.754731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.754757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.754969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.754998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.755200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.755226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.755386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.755412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.827 qpair failed and we were unable to recover it. 00:34:16.827 [2024-07-14 05:48:23.755601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.827 [2024-07-14 05:48:23.755644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.755849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.755885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 
00:34:16.828 [2024-07-14 05:48:23.756115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.756141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.756328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.756354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.756561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.756587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.756741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.756767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.756950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.756980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.757187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.757214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.757401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.757427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.757612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.757637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.757795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.757821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.758047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.758073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 
00:34:16.828 [2024-07-14 05:48:23.758286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.758315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.758511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.758537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.758721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.758748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.758939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.758968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.759141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.759168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.759351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.759377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.759595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.759620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.759804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.759830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.760019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.760046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.760230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.760256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 
00:34:16.828 [2024-07-14 05:48:23.760451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.760477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.760661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.760686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.760924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.760953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.761149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.761175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.761356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.761382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.761568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.761594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.761778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.761808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.762014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.762040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.762247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.762276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.762443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.762468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 
00:34:16.828 [2024-07-14 05:48:23.762650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.762676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.762859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.762896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.763083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.763110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.763350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.763383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.763583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.763611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.763794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.763819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.764030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.764060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.764266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.764295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.764504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.764530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.828 qpair failed and we were unable to recover it. 00:34:16.828 [2024-07-14 05:48:23.764709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.828 [2024-07-14 05:48:23.764735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 
00:34:16.829 [2024-07-14 05:48:23.764916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.764943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.765121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.765146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.765326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.765351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.765505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.765530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.765740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.765766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.765926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.765953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.766168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.766197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.766372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.766399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.766604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.766630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.766811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.766836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 
00:34:16.829 [2024-07-14 05:48:23.767002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.767028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.767209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.767234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.767434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.767463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.767685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.767711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.767889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.767915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.768096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.768121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.768313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.768339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.768568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.768596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.768802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.768831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.769059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.769085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 
00:34:16.829 [2024-07-14 05:48:23.769284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.769310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.769465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.769490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.769671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.769697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.769849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.769890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.770077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.770103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.770338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.770364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.770548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.770576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.770774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.770802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.771039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.771065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.771247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.771273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 
00:34:16.829 [2024-07-14 05:48:23.771505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.771533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.771735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.771761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.771937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.771963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.772121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.772151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.829 qpair failed and we were unable to recover it. 00:34:16.829 [2024-07-14 05:48:23.772341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.829 [2024-07-14 05:48:23.772366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.772571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.772599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.772801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.772829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.773045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.773071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.773305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.773334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.773556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.773585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 
00:34:16.830 [2024-07-14 05:48:23.773788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.773814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.774027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.774055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.774287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.774312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.774498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.774523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.774893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.774925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.775150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.775178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.775376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.775402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.775612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.775640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.775809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.775837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.776030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.776056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 
00:34:16.830 [2024-07-14 05:48:23.776234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.776263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.776432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.776461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.776673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.776699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.776900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.776933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.777169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.777198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.777429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.777455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.777652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.777680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.777846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.777881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.778093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.778119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.778328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.778356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 
00:34:16.830 [2024-07-14 05:48:23.778592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.778621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.778805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.778831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.779007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.779033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.779243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.779273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.779480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.779506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.779689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.779718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.779944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.779974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.780207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.780233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.780465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.780493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.780690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.780718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 
00:34:16.830 [2024-07-14 05:48:23.780923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.780950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.781107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.781135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.781369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.781398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.781612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.781641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.781853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.830 [2024-07-14 05:48:23.781891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.830 qpair failed and we were unable to recover it. 00:34:16.830 [2024-07-14 05:48:23.782075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.782100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.782257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.782284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.782516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.782545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.782709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.782737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.782965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.782992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 
00:34:16.831 [2024-07-14 05:48:23.783231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.783266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.783472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.783500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.783677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.783703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.783931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.783960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.784139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.784168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.784372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.784399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.784564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.784589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.784828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.784856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.785103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.785129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.785301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.785329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 
00:34:16.831 [2024-07-14 05:48:23.785512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.785537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.785747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.785773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.785979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.786008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.786209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.786239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.786475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.786501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.786746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.786776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.786982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.787011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.787215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.787241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.787420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.787450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.787656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.787683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 
00:34:16.831 [2024-07-14 05:48:23.787874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.787901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.788123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.788152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.788349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.788378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.788587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.788613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.788816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.788845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.789031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.789061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.789238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.789264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.789460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.789489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.789657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.789685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.789929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.789957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 
00:34:16.831 [2024-07-14 05:48:23.790196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.790225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.790432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.790461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.790688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.790714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.790912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.790946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.791170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.791199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.791428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.831 [2024-07-14 05:48:23.791454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.831 qpair failed and we were unable to recover it. 00:34:16.831 [2024-07-14 05:48:23.791640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.791669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.791898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.791928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.792163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.792188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.792388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.792417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 
00:34:16.832 [2024-07-14 05:48:23.792643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.792672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.792854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.792891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.793106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.793131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.793305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.793333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.793515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.793541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.793723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.793750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.793985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.794014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.794194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.794220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.794403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.794430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.794664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.794693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 
00:34:16.832 [2024-07-14 05:48:23.794923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.794949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.795189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.795218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.795420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.795449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.795677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.795704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.795942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.795980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.796159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.796188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.796373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.796400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.796608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.796638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.796877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.796906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.797142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.797168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 
00:34:16.832 [2024-07-14 05:48:23.797406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.797435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.797633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.797662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.797892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.797927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.798140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.798168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.798373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.798398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.798559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.798585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.798817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.798846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.799033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.799064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.799271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.799297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.799505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.799533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 
00:34:16.832 [2024-07-14 05:48:23.799760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.799789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.799997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.800024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.800230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.800259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.800484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.800517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.800698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.800724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.800908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.800935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.801146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.832 [2024-07-14 05:48:23.801175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.832 qpair failed and we were unable to recover it. 00:34:16.832 [2024-07-14 05:48:23.801381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.801407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.801637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.801665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.801873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.801902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 
00:34:16.833 [2024-07-14 05:48:23.802084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.802110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.802308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.802336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.802562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.802588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.802799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.802824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.803040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.803071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.803310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.803336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.803493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.803520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.803674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.803717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.803935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.803961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.804172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.804197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 
00:34:16.833 [2024-07-14 05:48:23.804408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.804437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.804653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.804681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.804887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.804914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.805116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.805144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.805342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.805372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.805602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.805628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.805842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.805884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.806106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.806135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.806336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.806362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.806592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.806621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 
00:34:16.833 [2024-07-14 05:48:23.806795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.806830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.807073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.807100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.807304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.807333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.807516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.807545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.807750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.807776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.808021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.808050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.808228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.808258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.808459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.808486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.808651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.808677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.808871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.808897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 
00:34:16.833 [2024-07-14 05:48:23.809086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.809113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.809326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.809355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.833 [2024-07-14 05:48:23.809560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.833 [2024-07-14 05:48:23.809588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.833 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.809792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.809817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.809989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.810016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.810166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.810192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.810403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.810428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.810618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.810648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.810876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.810903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.811086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.811111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 
00:34:16.834 [2024-07-14 05:48:23.811322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.811351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.811551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.811580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.811786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.811814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.811998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.812024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.812230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.812258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.812458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.812484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.812669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.812695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.812909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.812939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.813141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.813167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.813374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.813402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 
00:34:16.834 [2024-07-14 05:48:23.813578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.813606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.813783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.813809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.813970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.813997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.814197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.814225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.814454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.814480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.814688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.814717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.814916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.814945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.815123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.815149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.815334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.815359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.815533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.815561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 
00:34:16.834 [2024-07-14 05:48:23.815751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.815781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.815960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.815989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.816217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.816246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.816428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.816454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.816635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.816661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.816822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.816848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.817033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.817059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.817240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.817268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.817497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.817525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.817758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.817784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 
00:34:16.834 [2024-07-14 05:48:23.817991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.818020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.818220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.818249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.818448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.818474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.834 [2024-07-14 05:48:23.818638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.834 [2024-07-14 05:48:23.818664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.834 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.818845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.818881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.819117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.819143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.819390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.819418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.819630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.819659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.819856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.819892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.820070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.820097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 
00:34:16.835 [2024-07-14 05:48:23.820250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.820275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.820483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.820509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.820709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.820738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.820937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.820966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.821197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.821223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.821453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.821479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.821660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.821686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.821846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.821878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.822083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.822112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.822313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.822341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 
00:34:16.835 [2024-07-14 05:48:23.822575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.822600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.822792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.822818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.823002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.823028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.823211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.823236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.823447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.823477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.823656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.823684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.823886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.823912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.824072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.824098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.824315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.824344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.824571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.824597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 
00:34:16.835 [2024-07-14 05:48:23.824813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.824846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.825072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.825099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.825262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.825287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.825494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.825523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.825754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.825783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.825995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.826021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.826209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.826236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.826442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.826471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.826696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.826722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.826888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.826921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 
00:34:16.835 [2024-07-14 05:48:23.827103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.827129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.827315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.827340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.827516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.827544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.827747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.827776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.835 qpair failed and we were unable to recover it. 00:34:16.835 [2024-07-14 05:48:23.827985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.835 [2024-07-14 05:48:23.828011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.828195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.828224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.828427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.828455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.828662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.828687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.828871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.828900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.829105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.829133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 
00:34:16.836 [2024-07-14 05:48:23.829315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.829340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.829489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.829530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.829729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.829758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.829964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.829991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.830192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.830220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.830386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.830414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.830622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.830647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.830812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.830838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.831028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.831055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.831208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.831235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 
00:34:16.836 [2024-07-14 05:48:23.831441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.831470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.831637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.831665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.831895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.831925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.832126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.832155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.832346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.832374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.832548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.832573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.832761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.832788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.832974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.833000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.833185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.833210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.833368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.833394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 
00:34:16.836 [2024-07-14 05:48:23.833576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.833607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.833789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.833819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.834025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.834052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.834204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.834231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.834422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.834448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.834662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.834691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.834893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.834929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.835132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.835158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.835363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.835391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.835593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.835622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 
00:34:16.836 [2024-07-14 05:48:23.835795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.835821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.836042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.836072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.836302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.836331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.836563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.836589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.836774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.836803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.836 [2024-07-14 05:48:23.837005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.836 [2024-07-14 05:48:23.837034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.836 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.837242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.837269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.837452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.837480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.837680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.837709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.837947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.837973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 
00:34:16.837 [2024-07-14 05:48:23.838152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.838181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.838362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.838391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.838623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.838649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.838854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.838890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.839090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.839120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.839332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.839358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.839546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.839571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.839799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.839824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.840012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.840038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.840247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.840276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 
00:34:16.837 [2024-07-14 05:48:23.840472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.840501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.840734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.840759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.840944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.840970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.841154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.841180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.841346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.841372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.841562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.841587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.841740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.841767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.841952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.841978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.842162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.842188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.842392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.842421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 
00:34:16.837 [2024-07-14 05:48:23.842624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.842654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.842864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.842900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.843102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.843130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.843339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.843365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.843594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.843623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.843845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.843880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.844087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.844114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.844349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.844378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.844588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.844613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.844796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.844821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 
00:34:16.837 [2024-07-14 05:48:23.845042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.837 [2024-07-14 05:48:23.845085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.837 qpair failed and we were unable to recover it. 00:34:16.837 [2024-07-14 05:48:23.845301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.845330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.845511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.845538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.845745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.845773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.845988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.846017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.846211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.846236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.846466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.846495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.846727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.846753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.846943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.846971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.847188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.847217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 
00:34:16.838 [2024-07-14 05:48:23.847417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.847446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.847654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.847680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.847890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.847926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.848164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.848190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.848378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.848405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.848610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.848639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.848833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.848861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.849108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.849134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.849322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.849350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.849527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.849555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 
00:34:16.838 [2024-07-14 05:48:23.849760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.849789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.849994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.850021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.850254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.850283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.850454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.850481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.850683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.850711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.850878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.850907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.851138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.851164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.851378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.851407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.851610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.851638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.851850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.851883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 
00:34:16.838 [2024-07-14 05:48:23.852058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.852093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.852271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.852300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.852514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.852540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.852741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.852769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.852969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.852999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.853230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.853255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.853438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.853468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.853666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.853694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.853929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.853955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.854169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.854198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 
00:34:16.838 [2024-07-14 05:48:23.854410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.854438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.838 [2024-07-14 05:48:23.854631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.838 [2024-07-14 05:48:23.854657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.838 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.854894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.854938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.855127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.855153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.855337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.855363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.855521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.855546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.855730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.855755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.855960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.855987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.856154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.856185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.856412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.856440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 
00:34:16.839 [2024-07-14 05:48:23.856648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.856674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.856857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.856894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.857134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.857173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.857377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.857402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.857633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.857661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.857841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.857877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.858120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.858145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.858367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.858396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.858606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.858635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.858884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.858911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 
00:34:16.839 [2024-07-14 05:48:23.859103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.859131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.859366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.859394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.859588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.859613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.859818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.859848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.860051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.860079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.860288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.860314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.860537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.860566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.860763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.860791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.861022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.861050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.861259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.861287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 
00:34:16.839 [2024-07-14 05:48:23.861473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.861504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.861657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.861682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.861853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.861890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.862097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.862125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.862343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.862370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.862574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.862603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.862813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.862840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.863079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.863106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.863360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.863389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.863567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.863596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 
00:34:16.839 [2024-07-14 05:48:23.863816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.863844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.864037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.864063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.839 qpair failed and we were unable to recover it. 00:34:16.839 [2024-07-14 05:48:23.864265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.839 [2024-07-14 05:48:23.864294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.864495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.864521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.864756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.864785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.864994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.865023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.865267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.865292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.865503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.865532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.865731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.865759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.865945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.865972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 
00:34:16.840 [2024-07-14 05:48:23.866136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.866165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.866372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.866401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.866601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.866627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.866863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.866898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.867108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.867137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.867376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.867402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.867591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.867616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.867826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.867855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.868119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.868145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.868360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.868386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 
00:34:16.840 [2024-07-14 05:48:23.868546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.868572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.868755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.868782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.868968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.868994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.869219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.869248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.869457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.869482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.869640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.869666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.869851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.869887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.870121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.870147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.870357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.870383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.870590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.870616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 
00:34:16.840 [2024-07-14 05:48:23.870803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.870833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.871054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.871083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.871281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.871310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.871503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.871529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.871738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.871766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.871983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.872010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.872216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.872242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.872400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.872427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.872608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.872635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.872818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.872844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 
00:34:16.840 [2024-07-14 05:48:23.873028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.873056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.873282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.873311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.873516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.873542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.840 qpair failed and we were unable to recover it. 00:34:16.840 [2024-07-14 05:48:23.873749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.840 [2024-07-14 05:48:23.873778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.873966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.873995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.874191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.874217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.874421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.874450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.874657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.874683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.874903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.874929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.875122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.875150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 
00:34:16.841 [2024-07-14 05:48:23.875329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.875358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.875565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.875590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.875794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.875822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.876032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.876058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.876240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.876266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.876427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.876454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.876688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.876716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.876914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.876942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.877147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.877176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.877392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.877420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 
00:34:16.841 [2024-07-14 05:48:23.877619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.877646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.877851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.877888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.878121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.878146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.878362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.878387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.878573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.878601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.878909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.878936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.879150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.879176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.879363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.879392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.879607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.879635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.879811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.879837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 
00:34:16.841 [2024-07-14 05:48:23.880037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.880072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.880282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.880311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.880495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.880521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.880695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.880723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.880930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.880959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.881166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.881192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.881346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.881372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.881570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.881598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.881828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.881853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.882070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.882099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 
00:34:16.841 [2024-07-14 05:48:23.882274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.882303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.841 qpair failed and we were unable to recover it. 00:34:16.841 [2024-07-14 05:48:23.882481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.841 [2024-07-14 05:48:23.882507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.882691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.882719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.882902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.882928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.883088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.883114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.883319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.883347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.883561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.883589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.883751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.883789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.884012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.884042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.884208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.884237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 
00:34:16.842 [2024-07-14 05:48:23.884463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.884488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.884654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.884683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.884864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.884900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.885084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.885111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.885331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.885361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.885604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.885642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.885858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.885891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.886103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.886133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.886308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.886336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.886541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.886566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 
00:34:16.842 [2024-07-14 05:48:23.886754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.886786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.886952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.886991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:16.842 [2024-07-14 05:48:23.887215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.842 [2024-07-14 05:48:23.887243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:16.842 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.887476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.887505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.887728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.887756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.887944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.887972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.888134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.888178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.888354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.888380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.888587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.888613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.888839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.888875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 
00:34:17.120 [2024-07-14 05:48:23.889102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.889136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.889342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.889367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.889574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.889602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.889766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.889795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.890007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.890033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.890191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.890217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.890427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.890456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.890659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.890686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.890898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.890928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.891130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.891158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 
00:34:17.120 [2024-07-14 05:48:23.891361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.891387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.891615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.891644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.891838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.891874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.892078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.892103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.892314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.892343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.892516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.892545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.892730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.892756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.892960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.892990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.893160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.893190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.893402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.893428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 
00:34:17.120 [2024-07-14 05:48:23.893610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.893638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.893800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.893829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.894047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.894074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.894314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.894343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.894579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.894608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.894842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.894883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.895097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.895122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.895306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.895335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.895542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.895567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.895726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.895752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 
00:34:17.120 [2024-07-14 05:48:23.895939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.895966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.896192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.896218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.896429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.896457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.896666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.896694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.896923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.896950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.897131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.897156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.897371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.897399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.897602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.897627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.897831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.897860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.898100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.898129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 
00:34:17.120 [2024-07-14 05:48:23.898333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.898364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.120 [2024-07-14 05:48:23.898519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.120 [2024-07-14 05:48:23.898545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.120 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.898720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.898750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.898958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.898984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.899197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.899226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.899408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.899436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.899660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.899686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.899846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.899878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.900090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.900117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.900294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.900319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 
00:34:17.121 [2024-07-14 05:48:23.900495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.900525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.900723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.900751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.900963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.900990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.901233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.901258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.901451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.901477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.901667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.901693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.901880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.901906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.902085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.902112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.902270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.902295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.902477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.902506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 
00:34:17.121 [2024-07-14 05:48:23.902668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.902696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.902900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.902927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.903114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.903142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.903360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.903388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.903573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.903608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.903791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.903817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.904028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.904055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.904214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.904239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.904444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.904472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 00:34:17.121 [2024-07-14 05:48:23.904680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.121 [2024-07-14 05:48:23.904708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.121 qpair failed and we were unable to recover it. 
00:34:17.121 [2024-07-14 05:48:23.904916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.121 [2024-07-14 05:48:23.904942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420
00:34:17.121 qpair failed and we were unable to recover it.
[the same three-line error (connect() failed, errno = 111; sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnection attempt in this span, with timestamps running from 2024-07-14 05:48:23.904916 through 05:48:23.952926]
00:34:17.124 [2024-07-14 05:48:23.953106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.953132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.953338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.953367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.953564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.953593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.953763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.953790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.953965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.953992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.954199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.954225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.954413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.954438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.954672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.954700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.954928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.954956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.955111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.955138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 
00:34:17.124 [2024-07-14 05:48:23.955326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.955352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.955597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.955625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.955880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.955923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.956104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.956130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.956375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.956403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.956598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.956623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.956849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.956887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.957082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.957111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.957309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.957334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.957491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.957517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 
00:34:17.124 [2024-07-14 05:48:23.957698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.957723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.957907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.957934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.958148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.958176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.958405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.958431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.958596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.958623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.958811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.958836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.959024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.959050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.959237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.959268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.959424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.959450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.959612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.959657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 
00:34:17.124 [2024-07-14 05:48:23.959872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.959898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.960062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.960087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.960264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.960292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.960492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.960517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.960666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.960691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.960898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.960924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.961100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.961126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.961279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.961305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.961485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.961510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.961764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.961789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 
00:34:17.124 [2024-07-14 05:48:23.961984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.962010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.962227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.962253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.962463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.962488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.962672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.962714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.962943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.962969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.963121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.963146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.963354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.963380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.963589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.963615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.963796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.963824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.124 [2024-07-14 05:48:23.964043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.964069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 
00:34:17.124 [2024-07-14 05:48:23.964303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.124 [2024-07-14 05:48:23.964332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.124 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.964555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.964580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.964796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.964824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.965036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.965062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.965230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.965259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.965491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.965520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.965740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.965769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.965977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.966003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.966209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.966237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.966449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.966477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 
00:34:17.125 [2024-07-14 05:48:23.966681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.966707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.966893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.966920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.967118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.967143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.967330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.967356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.967589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.967617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.967847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.967883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.968084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.968110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.968291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.968321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.968479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.968505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.968691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.968716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 
00:34:17.125 [2024-07-14 05:48:23.968889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.968918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.969141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.969170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.969401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.969426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.969614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.969639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.969824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.969849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.970040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.970066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.970233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.970260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.970478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.970506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.970724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.970752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.970957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.970983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 
00:34:17.125 [2024-07-14 05:48:23.971178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.971204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.971424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.971449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.971690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.971718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.971910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.971939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.972146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.972171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.972341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.972366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.972576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.972602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.972782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.972809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.972995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.973021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.973265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.973290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 
00:34:17.125 [2024-07-14 05:48:23.973473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.973498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.973713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.973741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.973982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.974011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.974219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.974245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.974437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.974463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.974679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.974708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.974888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.974915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.975100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.975126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.975302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.975328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.975510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.975536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 
00:34:17.125 [2024-07-14 05:48:23.975718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.975743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.975895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.975922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.976106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.976132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.976341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.976370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.976562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.976591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.976812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.976837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.976993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.977019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.977176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.977207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.977355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.977380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.977562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.977591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 
00:34:17.125 [2024-07-14 05:48:23.977794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.977822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.978031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.978057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.978216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.978241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.978450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.978491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.978675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.978700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.978893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.978936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.979132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.979157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.979319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.979345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.979526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.979552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.979737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.979780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 
00:34:17.125 [2024-07-14 05:48:23.979963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.979989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.980200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.980243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.980441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.980469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.980641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.980666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.980847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.980878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.981093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.981118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.125 [2024-07-14 05:48:23.981297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.125 [2024-07-14 05:48:23.981322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.125 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.981561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.981589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.981787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.981816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.982055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.982081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 
00:34:17.126 [2024-07-14 05:48:23.982264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.982294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.982475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.982502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.982686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.982712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.982872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.982899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.983067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.983093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.983306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.983332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.983518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.983547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.983758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.983785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.984002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.984028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.984210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.984235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 
00:34:17.126 [2024-07-14 05:48:23.984467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.984495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.984701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.984727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.984943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.984973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.985180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.985208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.985411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.985437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.985646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.985671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.985851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.985883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.986091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.986120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.986297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.986323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.986503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.986533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 
00:34:17.126 [2024-07-14 05:48:23.986740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.986766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.986976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.987002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.987187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.987213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.987407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.987433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.987661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.987689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.987891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.987933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.988089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.988114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.988331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.988360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.988569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.988597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.988797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.988823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 
00:34:17.126 [2024-07-14 05:48:23.989041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.989067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.989288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.989318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.989546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.989572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.989786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.989816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.990043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.990070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.990226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.990252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.990413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.990438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.990617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.990642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.990794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.990821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.991046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.991076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 
00:34:17.126 [2024-07-14 05:48:23.991283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.991312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.991516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.991541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.991762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.991788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.991947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.991975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.992145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.992171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.992396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.992421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.992576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.992603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.992807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.992833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.993052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.993081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.993313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.993341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 
00:34:17.126 [2024-07-14 05:48:23.993557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.993583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.993770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.993795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.994030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.994059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.994274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.994299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.994468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.994497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.994718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.994747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.994951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.994978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.995136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.995167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.995356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.995382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.995539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.995566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 
00:34:17.126 [2024-07-14 05:48:23.995768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.995798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.996000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.996030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.996242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.996268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.996418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.996443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.996651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.996677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.996857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.996890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.997127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.997155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.997345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.997374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.997552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.997578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.997765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.997791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 
00:34:17.126 [2024-07-14 05:48:23.997977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.998003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.998217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.998243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.998472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.998500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.998678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.998708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.126 qpair failed and we were unable to recover it. 00:34:17.126 [2024-07-14 05:48:23.998949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.126 [2024-07-14 05:48:23.998975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:23.999162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:23.999191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:23.999395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:23.999424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:23.999653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:23.999678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:23.999890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:23.999917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.000120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.000163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 
00:34:17.127 [2024-07-14 05:48:24.000353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.000379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.000614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.000642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.000878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.000907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.001080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.001106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.001322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.001349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.001533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.001561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.001773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.001802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.002002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.002029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.002238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.002266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.002459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.002485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 
00:34:17.127 [2024-07-14 05:48:24.002691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.002717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.002908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.002935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.003080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.003106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.003292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.003318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.003518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.003547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.003719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.003744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.003953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.003988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.004153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.004184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.004348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.004375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.004562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.004587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 
00:34:17.127 [2024-07-14 05:48:24.004782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.004808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.005060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.005087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.005293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.005321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.005503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.005533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.005826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.005854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.006065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.006091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.006276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.006301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.006514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.006540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.006755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.006783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.006949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.006978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 
00:34:17.127 [2024-07-14 05:48:24.007204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.007229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.007464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.007492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.007702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.007728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.007941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.007968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.008122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.008148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.008382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.008411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.008613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.008638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.008824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.008851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.009039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.009081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.009291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.009317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 
00:34:17.127 [2024-07-14 05:48:24.009543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.009572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.009776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.009804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.010025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.010051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.010232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.010258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.010447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.010473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.010656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.010681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.010843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.010877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.011070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.011095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.011281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.011307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.011544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.011572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 
00:34:17.127 [2024-07-14 05:48:24.011776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.011802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.012012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.012038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.012234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.012260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.012415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.012440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.012601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.012627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.012830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.012860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.013046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.013072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.013254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.013284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.013469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.013495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.013683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.013709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 
00:34:17.127 [2024-07-14 05:48:24.013934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.013960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.014148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.014177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.014396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.014424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.014657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.014683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.014894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.014921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.127 qpair failed and we were unable to recover it. 00:34:17.127 [2024-07-14 05:48:24.015086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.127 [2024-07-14 05:48:24.015112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.015320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.015345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.015503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.015529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.015709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.015739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.015947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.015973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 
00:34:17.128 [2024-07-14 05:48:24.016134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.016160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.016347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.016373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.016526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.016551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.016710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.016735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.016918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.016944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.017127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.017152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.017303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.017329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.017487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.017512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.017696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.017722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.017883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.017909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 
00:34:17.128 [2024-07-14 05:48:24.018100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.018127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.018329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.018355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.018597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.018625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.018824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.018853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.019070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.019096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.019279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.019307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.019487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.019512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.019672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.019697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.019885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.019911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.020091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.020116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 
00:34:17.128 [2024-07-14 05:48:24.020295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.020321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.020541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.020569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.020740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.020768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.020992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.021018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.021177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.021203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.021389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.021431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.021638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.021664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.021825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.021854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.022028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.022054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 00:34:17.128 [2024-07-14 05:48:24.022231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.128 [2024-07-14 05:48:24.022256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.128 qpair failed and we were unable to recover it. 
00:34:17.131 [2024-07-14 05:48:24.066957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.066984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.067137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.067162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.067402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.067430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.067634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.067660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.067823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.067848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.068033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.068058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.068244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.068271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.068457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.068500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.068729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.068758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.068928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.068954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 
00:34:17.131 [2024-07-14 05:48:24.069160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.069188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.069401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.069430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.069631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.069657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.069810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.069835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.070028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.070054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.070229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.070255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.070437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.070462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.070619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.070646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.070822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.070848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.071061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.071089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 
00:34:17.131 [2024-07-14 05:48:24.071314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.071342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.071549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.071574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.071761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.071786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.071994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.072021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.072208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.072237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.072447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.072476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.072703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.072731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.072956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.072984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.073164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.073190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.073378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.073406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 
00:34:17.131 [2024-07-14 05:48:24.073614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.073640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.073852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.073888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.074124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.074149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.074329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.074354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.074537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.074562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.074717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.074743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.074951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.074977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.075185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.075211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.075432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.075461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.075690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.075716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 
00:34:17.131 [2024-07-14 05:48:24.075926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.075952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.076157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.076182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.076367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.076392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.076600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.076629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.076825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.076853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.077057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.077084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.077250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.077276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.077463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.077489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.077645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.077671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.077877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.077903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 
00:34:17.131 [2024-07-14 05:48:24.078089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.078117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.078354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.078380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.078599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.078628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.078827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.078856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.079106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.079133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.079291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.079317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.079523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.079549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.131 [2024-07-14 05:48:24.079711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.131 [2024-07-14 05:48:24.079737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.131 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.079951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.079980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.080211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.080239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 
00:34:17.132 [2024-07-14 05:48:24.080471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.080497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.080654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.080680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.080834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.080860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.081079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.081104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.081309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.081343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.081541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.081570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.081766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.081796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.082008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.082034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.082214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.082239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.082398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.082423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 
00:34:17.132 [2024-07-14 05:48:24.082609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.082635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.082841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.082877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.083061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.083086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.083296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.083325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.083556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.083585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.083791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.083818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.084024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.084054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.084250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.084279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.084486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.084512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.084742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.084770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 
00:34:17.132 [2024-07-14 05:48:24.084967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.084997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.085207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.085233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.085434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.085462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.085638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.085667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.085837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.085864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.086037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.086063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.086226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.086252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.086463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.086489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.086693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.086722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.086946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.086976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 
00:34:17.132 [2024-07-14 05:48:24.087173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.087198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.087390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.087416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.087602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.087627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.087809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.087835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.088023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.088050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.088260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.088288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.088490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.088515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.088700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.088726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.088915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.088942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.089094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.089121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 
00:34:17.132 [2024-07-14 05:48:24.089282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.089308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.089492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.089518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.089704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.089730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.089914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.089956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.090114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.090160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.090360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.090387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.090570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.090597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.090776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.090802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.090984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.091011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.091240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.091269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 
00:34:17.132 [2024-07-14 05:48:24.091474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.091502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.091733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.091759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.091942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.091968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.092154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.092179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.092356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.092382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.092562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.092588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.092805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.092834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.093039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.093065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.093301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.093330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.093534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.093563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 
00:34:17.132 [2024-07-14 05:48:24.093763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.093789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.093951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.093977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.094188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.094214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.094423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.094449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.094630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.094658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.094829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.094858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.095075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.095101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.095286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.095312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.095480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.095505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 00:34:17.132 [2024-07-14 05:48:24.095716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.132 [2024-07-14 05:48:24.095742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.132 qpair failed and we were unable to recover it. 
00:34:17.132 [2024-07-14 05:48:24.095923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.132 [2024-07-14 05:48:24.095949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420
00:34:17.132 qpair failed and we were unable to recover it.
[... the same three-message failure (connect() failed, errno = 111 -> sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats for every successive reconnect attempt from 05:48:24.095923 through 05:48:24.141921 ...]
00:34:17.135 [2024-07-14 05:48:24.141895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.135 [2024-07-14 05:48:24.141921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420
00:34:17.135 qpair failed and we were unable to recover it.
00:34:17.135 [2024-07-14 05:48:24.142128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.142153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.142334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.142359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.142538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.142564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.142747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.142772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.142960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.142990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.143176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.143203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.143409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.143437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.143667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.143693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.143909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.143936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.144123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.144149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 
00:34:17.135 [2024-07-14 05:48:24.144305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.144330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.144545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.144570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.144757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.144786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.144998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.145025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.145204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.145229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.145409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.145434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.145644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.145670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.145848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.145889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.146056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.146082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 00:34:17.135 [2024-07-14 05:48:24.146237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.135 [2024-07-14 05:48:24.146263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.135 qpair failed and we were unable to recover it. 
00:34:17.136 [2024-07-14 05:48:24.146450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.146475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.146715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.146743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.146924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.146950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.147149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.147174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.147325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.147350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.147588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.147616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.147804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.147830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.148017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.148046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.148245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.148273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.148478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.148504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 
00:34:17.136 [2024-07-14 05:48:24.148684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.148710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.148905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.148932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.149152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.149178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.149381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.149409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.149583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.149611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.149839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.149874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.150083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.150109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.150293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.150318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.150501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.150526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.150711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.150739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 
00:34:17.136 [2024-07-14 05:48:24.150948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.150974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.151128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.151153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.151392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.151420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.151595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.151622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.151813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.151839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.152031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.152056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.152216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.152243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.152449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.152477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.152708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.152736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.152955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.152981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 
00:34:17.136 [2024-07-14 05:48:24.153133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.153175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.153412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.153441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.153682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.153710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.153926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.153952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.154138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.154182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.154381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.154410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.154591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.154616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.154789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.154815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.154978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.155004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.155182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.155207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 
00:34:17.136 [2024-07-14 05:48:24.155431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.155460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.155689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.155717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.155922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.155948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.156114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.156139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.156344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.156369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.156546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.156572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.156755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.156780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.156964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.156990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.157174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.157200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.157409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.157437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 
00:34:17.136 [2024-07-14 05:48:24.157640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.157669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.157905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.157951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.158138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.158164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.158324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.158349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.158532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.158558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.158710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.158737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.158980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.159006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.159194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.159219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.159428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.159457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.159638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.159666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 
00:34:17.136 [2024-07-14 05:48:24.159898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.159925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.160123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.160151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.160320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.160348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.160545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.160571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.160784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.160812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.161041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.161068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.161251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.161277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.161460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.161486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.161668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.161693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.161890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.161916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 
00:34:17.136 [2024-07-14 05:48:24.162140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.136 [2024-07-14 05:48:24.162169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.136 qpair failed and we were unable to recover it. 00:34:17.136 [2024-07-14 05:48:24.162346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.162374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.162603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.162629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.162834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.162859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.163026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.163052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.163235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.163261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.163466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.163492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.163680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.163709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.163947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.163973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.164155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.164184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 
00:34:17.137 [2024-07-14 05:48:24.164380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.164408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.164629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.164655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.164809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.164834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.165015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.165041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.165249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.165275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.165458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.165486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.165665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.165693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.165872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.165899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.166107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.166134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.166315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.166340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 
00:34:17.137 [2024-07-14 05:48:24.166568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.166594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.166747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.166779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.167037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.167063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.167257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.167284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.167467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.167493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.167642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.167668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.167851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.167886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.168065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.168092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.168296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.168324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.168524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.168551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 
00:34:17.137 [2024-07-14 05:48:24.168742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.168768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.168922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.168949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.169109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.169136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.169317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.169343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.169504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.169530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.169694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.169720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.169902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.169928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.170110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.170135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.170314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.170340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.170545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.170571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 
00:34:17.137 [2024-07-14 05:48:24.170756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.170783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.170964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.170990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.171199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.171227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.171466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.171491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.171676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.171702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.171884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.171911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.172122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.172147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.172351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.172377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.172585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.172613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 00:34:17.137 [2024-07-14 05:48:24.172785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.137 [2024-07-14 05:48:24.172813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.137 qpair failed and we were unable to recover it. 
00:34:17.410 [2024-07-14 05:48:24.217715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.410 [2024-07-14 05:48:24.217740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.410 qpair failed and we were unable to recover it. 00:34:17.410 [2024-07-14 05:48:24.217922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.410 [2024-07-14 05:48:24.217948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.410 qpair failed and we were unable to recover it. 00:34:17.410 [2024-07-14 05:48:24.218105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.410 [2024-07-14 05:48:24.218131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.410 qpair failed and we were unable to recover it. 00:34:17.410 [2024-07-14 05:48:24.218316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.410 [2024-07-14 05:48:24.218342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.410 qpair failed and we were unable to recover it. 00:34:17.410 [2024-07-14 05:48:24.218519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.410 [2024-07-14 05:48:24.218544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.410 qpair failed and we were unable to recover it. 00:34:17.410 [2024-07-14 05:48:24.218770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.410 [2024-07-14 05:48:24.218799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.410 qpair failed and we were unable to recover it. 00:34:17.410 [2024-07-14 05:48:24.218987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.410 [2024-07-14 05:48:24.219014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.410 qpair failed and we were unable to recover it. 00:34:17.410 [2024-07-14 05:48:24.219173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.410 [2024-07-14 05:48:24.219200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.410 qpair failed and we were unable to recover it. 00:34:17.410 [2024-07-14 05:48:24.219385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.410 [2024-07-14 05:48:24.219427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.410 qpair failed and we were unable to recover it. 00:34:17.410 [2024-07-14 05:48:24.219665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.410 [2024-07-14 05:48:24.219694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.410 qpair failed and we were unable to recover it. 
00:34:17.410 [2024-07-14 05:48:24.219898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.410 [2024-07-14 05:48:24.219925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.410 qpair failed and we were unable to recover it. 00:34:17.410 [2024-07-14 05:48:24.220104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.410 [2024-07-14 05:48:24.220138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.410 qpair failed and we were unable to recover it. 00:34:17.410 [2024-07-14 05:48:24.220365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.410 [2024-07-14 05:48:24.220393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.410 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.220595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.220621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.220803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.220829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.221022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.221048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.221232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.221258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.221441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.221466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.221650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.221676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.221878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.221908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 
00:34:17.411 [2024-07-14 05:48:24.222109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.222135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.222319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.222348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.222531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.222557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.222736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.222761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.222941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.222967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.223124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.223151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.223312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.223355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.223526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.223554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.223789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.223815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.223995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.224024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 
00:34:17.411 [2024-07-14 05:48:24.224219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.224247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.224465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.224491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.224652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.224677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.224863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.224895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.225077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.225103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.225308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.225337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.225538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.225566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.225742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.225767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.225942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.225968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.226177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.226203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 
00:34:17.411 [2024-07-14 05:48:24.226386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.226411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.226649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.226677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.226844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.226879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.227097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.227123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.227333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.227361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.227549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.227574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.227760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.227787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.227973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.228000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.228161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.228187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.228346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.228373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 
00:34:17.411 [2024-07-14 05:48:24.228580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.228610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.228778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.228811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.229016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.229043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.229203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.229228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.229436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.229462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.229617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.229658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.229840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.229875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.230053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.230078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.230263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.230290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 00:34:17.411 [2024-07-14 05:48:24.230474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.230500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.411 qpair failed and we were unable to recover it. 
00:34:17.411 [2024-07-14 05:48:24.230680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.411 [2024-07-14 05:48:24.230705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.230891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.230918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.231067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.231093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.231270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.231300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.231499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.231525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.231713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.231743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.231956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.231982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.232168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.232193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.232376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.232402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.232608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.232634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 
00:34:17.412 [2024-07-14 05:48:24.232788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.232815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.233031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.233061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.233288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.233316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.233500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.233526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.233708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.233733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.233901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.233928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.234113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.234139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.234321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.234347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.234574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.234602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.234776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.234802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 
00:34:17.412 [2024-07-14 05:48:24.235011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.235040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.235230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.235259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.235480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.235505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.235685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.235711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.235887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.235914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.236068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.236094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.236279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.236304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.236513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.236541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.236742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.236771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.236980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.237007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 
00:34:17.412 [2024-07-14 05:48:24.237169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.237194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.237380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.237411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.237618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.237646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.237877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.237906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.238084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.238111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.238347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.238375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.238616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.238642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.238852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.238885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.239085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.239110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.239311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.239336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 
00:34:17.412 [2024-07-14 05:48:24.239517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.239543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.239724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.239749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.239902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.239946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.240127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.240153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.240360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.240386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.240554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.240580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.240731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.412 [2024-07-14 05:48:24.240756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.412 qpair failed and we were unable to recover it. 00:34:17.412 [2024-07-14 05:48:24.240914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.240941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.241124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.241150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.241331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.241357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 
00:34:17.413 [2024-07-14 05:48:24.241509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.241536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.241720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.241761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.241949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.241980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.242179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.242204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.242391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.242416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.242608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.242636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.242840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.242876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.243062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.243088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.243337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.243379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.243571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.243618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 
00:34:17.413 [2024-07-14 05:48:24.243827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.243880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.244089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.244116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.244342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.244372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.244569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.244599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.244826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.244855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.245101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.245128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.245376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.245424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.245658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.245686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.245870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.245914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.246135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.246165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 
00:34:17.413 [2024-07-14 05:48:24.246386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.246430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.246635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.246666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.246849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.246904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.247096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.247124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.247341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.247370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.247574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.247603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.247786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.247814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.247994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.248021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.248262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.248311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.248494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.248523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 
00:34:17.413 [2024-07-14 05:48:24.248729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.248759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.248941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.248967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.249147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.249173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.249382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.249410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.249573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.249601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.249849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.249882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.250068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.250093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.413 [2024-07-14 05:48:24.250279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.413 [2024-07-14 05:48:24.250304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.413 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.250514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.250540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.250722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.250751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 
00:34:17.414 [2024-07-14 05:48:24.250977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.251004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.251148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.251173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.251350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.251380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.251681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.251735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.251972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.251999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.252179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.252204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.252382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.252410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.252617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.252647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.252873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.252914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.253112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.253153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 
00:34:17.414 [2024-07-14 05:48:24.253402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.253447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.253681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.253733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.253929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.253956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.254119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.254145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.254304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.254330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.254509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.254539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.254738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.254766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.254959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.254986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.255145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.255170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.255429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.255480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 
00:34:17.414 [2024-07-14 05:48:24.255803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.255850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.256072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.256104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.256313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.256338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.256582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.256640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.256803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.256831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.257033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.257063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.257287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.257313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.257554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.257606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.257830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.257859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.258072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.258099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 
00:34:17.414 [2024-07-14 05:48:24.258318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.258347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.258555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.258584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.258820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.258849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.259041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.259067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.259278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.259306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.259613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.259670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.259873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.259903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.260084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.260110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.260325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.260354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.260617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.260668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 
00:34:17.414 [2024-07-14 05:48:24.260924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.260951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.261157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.261186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.261386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.261415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.261660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.261703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.261931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.414 [2024-07-14 05:48:24.261958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.414 qpair failed and we were unable to recover it. 00:34:17.414 [2024-07-14 05:48:24.262117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.262159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.262411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.262439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.262667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.262695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.262941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.262982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.263151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.263178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 
00:34:17.415 [2024-07-14 05:48:24.263413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.263456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.263787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.263850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.264042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.264068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.264281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.264325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.264548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.264593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.264774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.264801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.264952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.264979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.265219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.265262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.265508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.265552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.265735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.265761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 
00:34:17.415 [2024-07-14 05:48:24.265947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.265974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.266215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.266258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.266503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.266547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.266733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.266758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.266966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.266997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.267228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.267271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.267480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.267523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.267682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.267708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.267917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.267943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.268114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.268143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 
00:34:17.415 [2024-07-14 05:48:24.268400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.268429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.268756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.268807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.269032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.269059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.269266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.269310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.269491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.269535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.269748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.269773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.269978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.270022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.270235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.270279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.270492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.270535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.270717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.270742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 
00:34:17.415 [2024-07-14 05:48:24.270971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.271015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.271253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.271297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.271530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.271574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.271795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.271822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.272043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.272069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.272345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.272390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.272570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.272600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.272800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.272829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.273045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.273078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.273446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.273511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 
00:34:17.415 [2024-07-14 05:48:24.273713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.273742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.273937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.273964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.274172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.415 [2024-07-14 05:48:24.274200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.415 qpair failed and we were unable to recover it. 00:34:17.415 [2024-07-14 05:48:24.274457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.274485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.274705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.274749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.274937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.274963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.275176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.275205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.275456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.275502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.275702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.275745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.275934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.275977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 
00:34:17.416 [2024-07-14 05:48:24.276166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.276210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.276622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.276677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.276910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.276937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.277128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.277172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.277472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.277529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.277804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.277833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.278042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.278070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.278284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.278312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.278570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.278617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.278839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.278886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 
00:34:17.416 [2024-07-14 05:48:24.279108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.279135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.279318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.279361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.279576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.279619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.279802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.279827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.280024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.280050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.280242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.280286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.280471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.280515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.280751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.280794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.280974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.281019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.281234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.281278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 
00:34:17.416 [2024-07-14 05:48:24.281466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.281509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.281693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.281718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.281953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.281996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.282202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.282245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.282423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.282467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.282693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.282733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.282934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.282963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.283197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.283226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.283454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.283490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.283779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.283830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 
00:34:17.416 [2024-07-14 05:48:24.284027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.284055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.284244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.284272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.284478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.284507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.284818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.284886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.285102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.416 [2024-07-14 05:48:24.285128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.416 qpair failed and we were unable to recover it. 00:34:17.416 [2024-07-14 05:48:24.285347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.285376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.285628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.285674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.285907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.285933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.286116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.286158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.286361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.286391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 
00:34:17.417 [2024-07-14 05:48:24.286611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.286641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.286838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.286874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.287084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.287111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.287356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.287382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.287647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.287693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.287927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.287954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.288117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.288142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.288333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.288363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.288705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.288764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.289001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.289028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 
00:34:17.417 [2024-07-14 05:48:24.289238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.289268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.289499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.289529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.289751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.289780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.289977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.290003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.290170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.290195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.290404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.290435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.290678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.290706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.290913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.290939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.291127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.291171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.291354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.291383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 
00:34:17.417 [2024-07-14 05:48:24.291585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.291614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.291816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.291844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.292063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.292089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.292302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.292331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.292528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.292557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.292755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.292784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.292975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.293002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.293211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.293240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.293519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.293552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.293753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.293782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 
00:34:17.417 [2024-07-14 05:48:24.293995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.294021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.294227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.294256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.294491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.294538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.294750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.294779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.294992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.295018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.295201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.295227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.295522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.295583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.295820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.295849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.296038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.296066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.296295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.296324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 
00:34:17.417 [2024-07-14 05:48:24.296628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.296682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.296913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.296953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.417 qpair failed and we were unable to recover it. 00:34:17.417 [2024-07-14 05:48:24.297182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.417 [2024-07-14 05:48:24.297210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.297426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.297469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.297731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.297777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.297966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.297992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.298210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.298254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.298490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.298532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.298886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.298931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.299117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.299142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 
00:34:17.418 [2024-07-14 05:48:24.299357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.299402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.299614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.299656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.299884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.299911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.300095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.300120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.300335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.300379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.300592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.300635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.300796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.300821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.301016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.301042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.301252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.301294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.301469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.301511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 
00:34:17.418 [2024-07-14 05:48:24.301751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.301794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.301983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.302009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.302244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.302287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.302530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.302573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.302765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.302791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.302947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.302973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.303183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.303211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.303502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.303545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.303764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.303794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.303949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.303976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 
00:34:17.418 [2024-07-14 05:48:24.304187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.304216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.304482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.304525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.304677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.304702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.304875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.304901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.305112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.305138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.305337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.305380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.305625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.305668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.305855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.305889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.306079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.306106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.306345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.306388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 
00:34:17.418 [2024-07-14 05:48:24.306683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.306737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.306950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.306977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.307226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.307269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.307467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.307509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.307693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.307719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.307976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.308006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.308227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.308271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.308472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.308515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.308726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.308751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 00:34:17.418 [2024-07-14 05:48:24.308988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.418 [2024-07-14 05:48:24.309032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.418 qpair failed and we were unable to recover it. 
00:34:17.418 [2024-07-14 05:48:24.309261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.309303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.309509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.309553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.309733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.309758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.309964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.310006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.310241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.310283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.310529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.310572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.310767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.310793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.311004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.311047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.311260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.311303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.311487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.311530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 
00:34:17.419 [2024-07-14 05:48:24.311718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.311743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.311969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.312014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.312222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.312266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.312469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.312512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.312699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.312725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.312941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.312984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.313249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.313292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.313511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.313537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.313745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.313775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.313994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.314039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 
00:34:17.419 [2024-07-14 05:48:24.314247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.314290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.314468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.314511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.314697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.314723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.314951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.314981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.315210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.315253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.315436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.315479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.315663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.315689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.315880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.315916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.316162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.316205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.316411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.316455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 
00:34:17.419 [2024-07-14 05:48:24.316691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.316720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.316937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.316980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.317197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.317240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.317450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.317492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.317696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.317739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.317971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.318014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.318221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.318264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.318479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.318522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.318677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.318703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.318891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.318917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 
00:34:17.419 [2024-07-14 05:48:24.319130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.319174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.319392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.319419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.319653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.319696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.319886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.319913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.320150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.320192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.320400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.320444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.320676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.320718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.419 [2024-07-14 05:48:24.320918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.419 [2024-07-14 05:48:24.320962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.419 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.321175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.321217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.321397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.321441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 
00:34:17.420 [2024-07-14 05:48:24.321644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.321688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.321848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.321879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.322089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.322140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.322351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.322392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.322604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.322647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.322872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.322898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.323081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.323106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.323314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.323342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.323557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.323603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.323811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.323836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 
00:34:17.420 [2024-07-14 05:48:24.324028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.324054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.324255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.324298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.324512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.324554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.324760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.324786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.324970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.324997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.325208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.325236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.325435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.325481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.325661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.325687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.325844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.325875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.326084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.326128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 
00:34:17.420 [2024-07-14 05:48:24.326321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.326350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.326549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.326592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.326818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.326844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.327028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.327056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.327269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.327311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.327523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.327565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.327776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.327801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.327991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.328017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.328248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.328291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.328506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.328548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 
00:34:17.420 [2024-07-14 05:48:24.328735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.328760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.328948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.328975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.329186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.329215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.329443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.329488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.329726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.329769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.330013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.330057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.330293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.330336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.330548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.330590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.330752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.420 [2024-07-14 05:48:24.330778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.420 qpair failed and we were unable to recover it. 00:34:17.420 [2024-07-14 05:48:24.330987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.331031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 
00:34:17.421 [2024-07-14 05:48:24.331190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.331216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.331402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.331446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.331695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.331722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.331884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.331911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.332096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.332139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.332347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.332389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.332601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.332628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.332806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.332832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.333050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.333098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.333305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.333347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 
00:34:17.421 [2024-07-14 05:48:24.333534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.333576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.333757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.333781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.333983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.334027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.334205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.334248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.334462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.334505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.334726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.334752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.334927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.334956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.335191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.335236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.335474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.335517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.335730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.335757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 
00:34:17.421 [2024-07-14 05:48:24.335918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.335945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.336159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.336201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.336381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.336426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.336663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.336705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.336895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.336920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.337118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.337159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.337387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.337431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.337632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.337676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.337861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.337894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.338105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.338149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 
00:34:17.421 [2024-07-14 05:48:24.338386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.338430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.338673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.338717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.338929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.338955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.339138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.339182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.339415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.339457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.339676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.339720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.339924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.339955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.340213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.340256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.340443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.340486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.340664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.340691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 
00:34:17.421 [2024-07-14 05:48:24.340879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.340905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.341122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.341165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.341351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.341381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.341637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.341681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.341876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.341902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.342114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.421 [2024-07-14 05:48:24.342140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.421 qpair failed and we were unable to recover it. 00:34:17.421 [2024-07-14 05:48:24.342324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.342365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.342574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.342616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.342824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.342853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.343050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.343076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 
00:34:17.422 [2024-07-14 05:48:24.343229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.343256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.343457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.343500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.343679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.343720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.343918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.343961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.344216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.344258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.344461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.344504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.344690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.344716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.344927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.344956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.345172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.345215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.345446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.345488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 
00:34:17.422 [2024-07-14 05:48:24.345648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.345674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.345887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.345913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.346099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.346142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.346355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.346398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.346603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.346645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.346829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.346854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.347098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.347140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.347335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.347361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.347572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.347616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.347793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.347819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 
00:34:17.422 [2024-07-14 05:48:24.348057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.348102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.348347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.348390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.348624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.348666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.348885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.348912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.349080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.349106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.349349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.349394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.349603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.349632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.349808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.349836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.350046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.350073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.350284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.350311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 
00:34:17.422 [2024-07-14 05:48:24.350478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.350506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.350732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.350759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.350968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.350994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.351147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.351188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.351413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.351441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.351618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.351643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.351822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.351847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.352022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.352048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.352280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.352307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.352488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.352515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 
00:34:17.422 [2024-07-14 05:48:24.352687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.352714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.352920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.352945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.353128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.353171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.422 [2024-07-14 05:48:24.353373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.422 [2024-07-14 05:48:24.353402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.422 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.353609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.353637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.353816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.353841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.354032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.354057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.354204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.354229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.354438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.354466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.354822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.354878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 
00:34:17.423 [2024-07-14 05:48:24.355093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.355119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.355322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.355349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.355555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.355588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.355804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.355829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.355995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.356021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.356237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.356265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.356493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.356520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.356836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.356892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.357093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.357118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.357330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.357358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 
00:34:17.423 [2024-07-14 05:48:24.357586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.357614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.357812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.357840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.358046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.358071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.358256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.358283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.358458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.358486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.358844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.358907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.359123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.359148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.359353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.359378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.359612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.359641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.359855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.359886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 
00:34:17.423 [2024-07-14 05:48:24.360072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.360097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.360337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.360365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.360596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.360624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.360934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.360960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.361115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.361139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.361338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.361366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.361568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.361596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.361789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.361817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.362010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.362035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.362201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.362229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 
00:34:17.423 [2024-07-14 05:48:24.362432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.362459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.362791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.362842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.363046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.363072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.363276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.363304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.363531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.363559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.363760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.363789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.363976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.364002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.364182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.364207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.364400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.364429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.364769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.364828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 
00:34:17.423 [2024-07-14 05:48:24.365070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.365095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.423 [2024-07-14 05:48:24.365285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.423 [2024-07-14 05:48:24.365313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.423 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.365515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.365543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.365813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.365841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.366052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.366078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.366287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.366315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.366515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.366543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.366740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.366768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.367004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.367031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.367194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.367219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 
00:34:17.424 [2024-07-14 05:48:24.367461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.367489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.367723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.367751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.367959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.367986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.368216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.368244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.368420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.368448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.368768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.368818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.369060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.369090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.369329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.369357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.369603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.369631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.369978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.370005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 
00:34:17.424 [2024-07-14 05:48:24.370213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.370243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.370450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.370478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.370657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.370685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.370856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.370893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.371082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.371108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.371337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.371365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.371572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.371600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.371800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.371827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.372065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.372091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.372248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.372273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 
00:34:17.424 [2024-07-14 05:48:24.372458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.372483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.372727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.372752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.372935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.372964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.373176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.373201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.373438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.373466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.373660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.373684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.373854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.373889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.374117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.374145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.374387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.374412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.374617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.374642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 
00:34:17.424 [2024-07-14 05:48:24.374816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.374844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.375089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.375115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.375317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.375345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.375520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.375545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.424 [2024-07-14 05:48:24.375753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.424 [2024-07-14 05:48:24.375781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.424 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.375979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.376005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.376181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.376207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.376387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.376412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.376644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.376672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.376881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.376907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 
00:34:17.425 [2024-07-14 05:48:24.377082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.377107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.377314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.377339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.377570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.377598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.377774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.377802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.378002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.378028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.378213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.378238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.378442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.378470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.378674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.378718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.378924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.378956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.379178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.379205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 
00:34:17.425 [2024-07-14 05:48:24.379394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.379420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.379606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.379632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.379807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.379836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.380045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.380071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.380291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.380317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.380627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.380688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.380931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.380958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.381141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.381167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.381348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.381374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.381708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.381761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 
00:34:17.425 [2024-07-14 05:48:24.382006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.382038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.382200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.382226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.382431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.382457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.382715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.382765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.382998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.383024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.383209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.383235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.383424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.383450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.383763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.383814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.384030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.384059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.384260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.384285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 
00:34:17.425 [2024-07-14 05:48:24.384448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.384474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.384647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.384675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.384879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.384908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.385087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.385113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.385307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.385334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.385578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.385603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.385792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.385817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.386018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.386045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.386231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.386257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.386470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.386499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 
00:34:17.425 [2024-07-14 05:48:24.386725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.386753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.386937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.386965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.425 [2024-07-14 05:48:24.387141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.425 [2024-07-14 05:48:24.387167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.425 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.387403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.387432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.387645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.387675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.387876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.387919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.388104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.388130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.388318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.388347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.388526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.388554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.388786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.388811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 
00:34:17.426 [2024-07-14 05:48:24.389001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.389026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.389213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.389239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.389447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.389476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.389687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.389713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.389876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.389902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.390114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.390140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.390365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.390394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.390623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.390649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.390839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.390872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.391031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.391057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 
00:34:17.426 [2024-07-14 05:48:24.391261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.391294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.391479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.391506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.391718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.391744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.391946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.391972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.392150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.392192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.392370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.392396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.392585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.392611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.392823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.392852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.393053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.393079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.393242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.393268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 
00:34:17.426 [2024-07-14 05:48:24.393447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.393472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.393768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.393828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.394053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.394079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.394263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.394289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.394466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.394492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.394701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.394729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.394938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.394967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.395168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.395194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.395401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.395426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.395640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.395667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 
00:34:17.426 [2024-07-14 05:48:24.395827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.395853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.396031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.396057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.396264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.396289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.396539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.396568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.396744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.396772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.396970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.396996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.397171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.397197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.397378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.397408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.397618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.397661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.426 [2024-07-14 05:48:24.397870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.397899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 
00:34:17.426 [2024-07-14 05:48:24.398105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.426 [2024-07-14 05:48:24.398130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.426 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.398350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.398379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.398589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.398615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.398825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.398850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.399044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.399070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.399243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.399271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.399477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.399505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.399710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.399735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.399922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.399959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.400175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.400204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 
00:34:17.427 [2024-07-14 05:48:24.400414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.400442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.400654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.400681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.400897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.400923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.401124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.401153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.401326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.401355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.401560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.401586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.401768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.401794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.401966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.401996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.402200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.402229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.402438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.402464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 
00:34:17.427 [2024-07-14 05:48:24.402650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.402675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.402852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.402887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.403056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.403085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.403318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.403344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.403558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.403584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.403766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.403793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.403982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.404017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.404223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.404248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.404433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.404459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.404665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.404690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 
00:34:17.427 [2024-07-14 05:48:24.404950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.404976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.405167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.405192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.405376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.405402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.405556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.405582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.405758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.405784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.405946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.405974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.406154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.406181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.406413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.406447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.406657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.406687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.406902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.406928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 
00:34:17.427 [2024-07-14 05:48:24.407096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.407122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.407313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.407339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.407550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.407578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.427 [2024-07-14 05:48:24.407754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.427 [2024-07-14 05:48:24.407780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.427 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.407950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.407977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.408223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.408252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.408455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.408484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.408690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.408716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.408904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.408936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.409103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.409129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 
00:34:17.428 [2024-07-14 05:48:24.409315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.409340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.409571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.409597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.409780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.409806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.409989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.410016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.410221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.410250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.410424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.410450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.410634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.410661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.410846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.410881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.411079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.411105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.411308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.411334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 
00:34:17.428 [2024-07-14 05:48:24.411492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.411517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.411745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.411799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.412021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.412050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.412237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.412263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.412475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.412501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.412678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.412708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.412944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.412970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.413185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.413211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.413367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.413392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 00:34:17.428 [2024-07-14 05:48:24.413568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.428 [2024-07-14 05:48:24.413594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.428 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 05:48:24.413747 through 05:48:24.457779 ...]
00:34:17.432 [2024-07-14 05:48:24.457959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.432 [2024-07-14 05:48:24.457986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420
00:34:17.432 qpair failed and we were unable to recover it.
00:34:17.432 [2024-07-14 05:48:24.458149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-14 05:48:24.458174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.432 qpair failed and we were unable to recover it. 00:34:17.432 [2024-07-14 05:48:24.458354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-14 05:48:24.458379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.432 qpair failed and we were unable to recover it. 00:34:17.432 [2024-07-14 05:48:24.458583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-14 05:48:24.458607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.432 qpair failed and we were unable to recover it. 00:34:17.432 [2024-07-14 05:48:24.458792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-14 05:48:24.458819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.432 qpair failed and we were unable to recover it. 00:34:17.432 [2024-07-14 05:48:24.459028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-14 05:48:24.459057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.432 qpair failed and we were unable to recover it. 00:34:17.432 [2024-07-14 05:48:24.459258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-14 05:48:24.459285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.432 qpair failed and we were unable to recover it. 00:34:17.432 [2024-07-14 05:48:24.459463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-14 05:48:24.459489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.432 qpair failed and we were unable to recover it. 00:34:17.432 [2024-07-14 05:48:24.459672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-14 05:48:24.459701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.432 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.459885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.459911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.460097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.460121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-14 05:48:24.460329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.460355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.460555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.460584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.460821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.460846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.461075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.461101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.461258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.461284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.461493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.461534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.461739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.461764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.461959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.461985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.462173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.462198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.462420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.462449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-14 05:48:24.462654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.462682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.462895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.462922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.463127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.463152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.463333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.463359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.463592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.463621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.463829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.463854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.464180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.464207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.464396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.464422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.464610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.464636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.464815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.464840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-14 05:48:24.465055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.465080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.465281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.465308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.465515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.465545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.465755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.465780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.465968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.465995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.466182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.466208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.466390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.466417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.466608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.466633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.466834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.466862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.467043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.467068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-14 05:48:24.467229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.467255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.467450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.467475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.467684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.467708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.467887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.467916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.468094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.468122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.468327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.468353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.468561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.468586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.468748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.468779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.468936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.468961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.469138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.469164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 
00:34:17.433 [2024-07-14 05:48:24.469315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.469341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.469548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.469576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.469771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.469798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.469973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.469998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.470180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.470207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.433 [2024-07-14 05:48:24.470382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-14 05:48:24.470407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.433 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.470593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.470618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.470769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.470795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.471015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.471042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.471247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.471276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 
00:34:17.434 [2024-07-14 05:48:24.471476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.471505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.471708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.471735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.471920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.471947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.472127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.472153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.472361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.472386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.472598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.472623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.472808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.472837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.473064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.473090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.473270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.473298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.473521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.473547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 
00:34:17.434 [2024-07-14 05:48:24.473752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.473777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.474007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.474036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.474240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.474265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.474423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.474448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.474489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1413390 (9): Bad file descriptor 00:34:17.434 [2024-07-14 05:48:24.474741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.474780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.474972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.475000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.475161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.475187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.475369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.475397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 00:34:17.434 [2024-07-14 05:48:24.475597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.434 [2024-07-14 05:48:24.475625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.434 qpair failed and we were unable to recover it. 
[... the same three-record error then repeats continuously for tqpair=0x1405840 (addr=10.0.0.2, port=4420) from 05:48:24.475 through 05:48:24.496; only the timestamps differ ...]
00:34:17.436 [2024-07-14 05:48:24.496408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.496437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.496637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.496674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.496862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.496899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.497111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.497138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.497288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.497314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.497498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.497524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.497687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.497716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.497873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.497901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.498060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.498086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.498270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.498295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 
00:34:17.436 [2024-07-14 05:48:24.498509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.498556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.498766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.498799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.499043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.499073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.499300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.499340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.499500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.499529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.499684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.499712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.499877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.499907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.500132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.500161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.500417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.500449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.500646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.500674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 
00:34:17.436 [2024-07-14 05:48:24.500884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.500913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.501077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.501105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.501315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.501358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.501564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.501594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.501825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.436 [2024-07-14 05:48:24.501862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.436 qpair failed and we were unable to recover it. 00:34:17.436 [2024-07-14 05:48:24.502094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.437 [2024-07-14 05:48:24.502124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.437 qpair failed and we were unable to recover it. 00:34:17.712 [2024-07-14 05:48:24.502322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.712 [2024-07-14 05:48:24.502355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.712 qpair failed and we were unable to recover it. 00:34:17.712 [2024-07-14 05:48:24.502558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.712 [2024-07-14 05:48:24.502585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.712 qpair failed and we were unable to recover it. 00:34:17.712 [2024-07-14 05:48:24.502773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.712 [2024-07-14 05:48:24.502802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.712 qpair failed and we were unable to recover it. 00:34:17.712 [2024-07-14 05:48:24.502961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.712 [2024-07-14 05:48:24.502989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.712 qpair failed and we were unable to recover it. 
00:34:17.712 [2024-07-14 05:48:24.503142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.712 [2024-07-14 05:48:24.503168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.712 qpair failed and we were unable to recover it. 00:34:17.712 [2024-07-14 05:48:24.503326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.712 [2024-07-14 05:48:24.503355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.712 qpair failed and we were unable to recover it. 00:34:17.712 [2024-07-14 05:48:24.503515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.712 [2024-07-14 05:48:24.503544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.712 qpair failed and we were unable to recover it. 00:34:17.712 [2024-07-14 05:48:24.503731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.712 [2024-07-14 05:48:24.503761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.712 qpair failed and we were unable to recover it. 00:34:17.712 [2024-07-14 05:48:24.503998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.712 [2024-07-14 05:48:24.504026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.712 qpair failed and we were unable to recover it. 00:34:17.712 [2024-07-14 05:48:24.504203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.504230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.504383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.504411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.504600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.504627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.504808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.504836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.505061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.505089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 
00:34:17.713 [2024-07-14 05:48:24.505260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.505288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.505468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.505495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.505704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.505731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.505925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.505954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.506108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.506135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.506314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.506341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.506499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.506528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.506739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.506766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.506956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.506985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.507163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.507190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 
00:34:17.713 [2024-07-14 05:48:24.507377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.507406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.507589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.507616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.507794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.507822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.508017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.508046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.508251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.508278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.508629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.508692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.508936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.508964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.509157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.509184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.509367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.509394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.509556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.509583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 
00:34:17.713 [2024-07-14 05:48:24.509764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.509790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.509976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.510005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.510189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.510217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.510402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.510430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.510614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.510640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.510860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.510893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.511095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.511127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.511366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.511393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.511602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.511629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.511824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.511851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 
00:34:17.713 [2024-07-14 05:48:24.512025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.512053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.512262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.512289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.512473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.512502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.512687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.512715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.512936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.512964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.513169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.513196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.513406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.513437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.513641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.713 [2024-07-14 05:48:24.513673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.713 qpair failed and we were unable to recover it. 00:34:17.713 [2024-07-14 05:48:24.513905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.513933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.514120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.514163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 
00:34:17.714 [2024-07-14 05:48:24.514377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.514409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.514640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.514667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.514825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.514863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.515088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.515116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.515303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.515331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.515513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.515540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.515720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.515747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.515930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.515958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.516112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.516140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.516326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.516354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 
00:34:17.714 [2024-07-14 05:48:24.516565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.516592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.516775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.516803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.517011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.517038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.517257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.517285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.517528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.517556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.517745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.517774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.517957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.517985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.518184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.518211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.518417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.518460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.518653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.518680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 
00:34:17.714 [2024-07-14 05:48:24.518834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.518861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.519075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.519102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.519308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.519335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.519532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.519560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.519790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.519820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.520012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.520040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.520257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.520288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.520454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.520483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.520665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.520693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.520939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.520967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 
00:34:17.714 [2024-07-14 05:48:24.521211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.521242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.521449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.521476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.521687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.521715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.521943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.521971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.522175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.522203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.522388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.522416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.522605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.714 [2024-07-14 05:48:24.522632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.714 qpair failed and we were unable to recover it. 00:34:17.714 [2024-07-14 05:48:24.522841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.522874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.523062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.523089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.523245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.523278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 
00:34:17.715 [2024-07-14 05:48:24.523492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.523519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.523697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.523724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.523878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.523906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.524108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.524135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.524319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.524346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.524593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.524623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.524805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.524832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.525044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.525072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.525260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.525287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.525463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.525490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 
00:34:17.715 [2024-07-14 05:48:24.525692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.525720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.525927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.525959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.526172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.526199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.526402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.526429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.526610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.526637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.526814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.526843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.527073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.527101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.527330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.527360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.527595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.527623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.527814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.527841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 
00:34:17.715 [2024-07-14 05:48:24.528056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.528084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.528262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.528289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.528519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.528549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.528789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.528819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.529033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.529060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.529240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.529267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.529468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.529512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.529744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.529771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.529982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.530010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.530198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.530232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 
00:34:17.715 [2024-07-14 05:48:24.530426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.530453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.530638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.530671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.530827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.530854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.531058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.531085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.531264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.531309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.531536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.531563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.531747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.531775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.531926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.531954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.532135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.715 [2024-07-14 05:48:24.532162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.715 qpair failed and we were unable to recover it. 00:34:17.715 [2024-07-14 05:48:24.532356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.532383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 
00:34:17.716 [2024-07-14 05:48:24.532647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.532691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.532900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.532946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.533139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.533167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.533373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.533401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.533590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.533618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.533803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.533831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.533996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.534025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.534176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.534205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.534415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.534442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.534627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.534654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 
00:34:17.716 [2024-07-14 05:48:24.534836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.534863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.535062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.535089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.535304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.535334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.535566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.535598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.535786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.535814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.536011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.536039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.536207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.536235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.536445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.536472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.536673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.536700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.536879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.536907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 
00:34:17.716 [2024-07-14 05:48:24.537090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.537117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.537320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.537346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.537566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.537610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.537814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.537842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.538035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.538062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.538252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.538278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.538478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.538505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.538697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.538724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.538881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.538909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.539069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.539096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 
00:34:17.716 [2024-07-14 05:48:24.539271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.539298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.539494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.539521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.539709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.539736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.539932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.539978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.540186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.540217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.540419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.540447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.716 [2024-07-14 05:48:24.540631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.716 [2024-07-14 05:48:24.540658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.716 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.540875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.540918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.541122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.541150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.541345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.541373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 
00:34:17.717 [2024-07-14 05:48:24.541589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.541634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.541872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.541900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.542882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.542928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.543167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.543197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.543394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.543422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.543586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.543616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.543818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.543845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.544114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.544159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.544435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.544469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.544657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.544688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 
00:34:17.717 [2024-07-14 05:48:24.544861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.544897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.545054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.545082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.545267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.545293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.545448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.545481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.545669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.545696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.545878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.545917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.546085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.546112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.546298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.546325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.546536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.546563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.546883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.546911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 
00:34:17.717 [2024-07-14 05:48:24.547100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.547134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.547337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.547381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.547669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.547725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.547949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.547976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.548160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.548204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.548458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.548488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.548683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.548714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.548910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.548964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.549137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.549165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.549335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.549365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 
00:34:17.717 [2024-07-14 05:48:24.549664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.549722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.549951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.549979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.550193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.550220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.550379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.550406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.550600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.550630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.550831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.550863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.551050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.551077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.551256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.551284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.551473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.551501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.551701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.551731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 
00:34:17.717 [2024-07-14 05:48:24.551916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.551944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.552153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.552181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.552466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.552517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.717 qpair failed and we were unable to recover it. 00:34:17.717 [2024-07-14 05:48:24.552697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.717 [2024-07-14 05:48:24.552728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.552963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.552991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.553202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.553233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.553491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.553521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.553749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.553780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.553957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.553984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.554151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.554179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 
00:34:17.718 [2024-07-14 05:48:24.554420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.554451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.554862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.554937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.555096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.555124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.555356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.555391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.555818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.555885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.556119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.556155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.556383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.556413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.556682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.556731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.556943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.556971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.557156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.557185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 
00:34:17.718 [2024-07-14 05:48:24.557428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.557459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.557858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.557946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.558110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.558149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.558353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.558383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.558645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.558698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.558901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.558934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.559144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.559171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.559567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.559616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.559819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.559850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.560040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.560068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 
00:34:17.718 [2024-07-14 05:48:24.560423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.560475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.560686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.560718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.560959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.560987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.561195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.561222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.561424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.561455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.561842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.561914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.562118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.562146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.562388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.562419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.562752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.562807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.563013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.563040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 
00:34:17.718 [2024-07-14 05:48:24.563276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.563306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.563479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.563512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.563886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.563945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.564132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.564160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.564380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.564412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.564773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.564831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.565063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.565090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.718 [2024-07-14 05:48:24.565257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.718 [2024-07-14 05:48:24.565285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.718 qpair failed and we were unable to recover it. 00:34:17.719 [2024-07-14 05:48:24.565518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.719 [2024-07-14 05:48:24.565548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.719 qpair failed and we were unable to recover it. 00:34:17.719 [2024-07-14 05:48:24.565774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.719 [2024-07-14 05:48:24.565804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.719 qpair failed and we were unable to recover it. 
00:34:17.719 [2024-07-14 05:48:24.566004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.719 [2024-07-14 05:48:24.566032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.719 qpair failed and we were unable to recover it. 00:34:17.719 [2024-07-14 05:48:24.566198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.719 [2024-07-14 05:48:24.566226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.719 qpair failed and we were unable to recover it. 00:34:17.719 [2024-07-14 05:48:24.566419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.719 [2024-07-14 05:48:24.566447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.719 qpair failed and we were unable to recover it. 00:34:17.719 [2024-07-14 05:48:24.566629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.719 [2024-07-14 05:48:24.566660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.719 qpair failed and we were unable to recover it. 00:34:17.719 [2024-07-14 05:48:24.566909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.719 [2024-07-14 05:48:24.566952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.719 qpair failed and we were unable to recover it. 00:34:17.719 [2024-07-14 05:48:24.567125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.719 [2024-07-14 05:48:24.567153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.719 qpair failed and we were unable to recover it. 00:34:17.719 [2024-07-14 05:48:24.567335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.719 [2024-07-14 05:48:24.567362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.719 qpair failed and we were unable to recover it. 00:34:17.719 [2024-07-14 05:48:24.567752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.719 [2024-07-14 05:48:24.567805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.719 qpair failed and we were unable to recover it. 00:34:17.719 [2024-07-14 05:48:24.568010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.719 [2024-07-14 05:48:24.568041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.719 qpair failed and we were unable to recover it. 00:34:17.719 [2024-07-14 05:48:24.568201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.719 [2024-07-14 05:48:24.568229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.719 qpair failed and we were unable to recover it. 
00:34:17.719 [2024-07-14 05:48:24.568437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:17.719 [2024-07-14 05:48:24.568467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 
00:34:17.719 qpair failed and we were unable to recover it. 
00:34:17.724 [The same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 05:48:24.568 through 05:48:24.618; the duplicate entries are elided here.] 
00:34:17.724 [2024-07-14 05:48:24.618403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.618430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.618622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.618649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.618806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.618833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.619037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.619066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.619244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.619275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.619457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.619488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.619689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.619717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.619908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.619937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.620134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.620162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.620313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.620341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 
00:34:17.724 [2024-07-14 05:48:24.620579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.620609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.620787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.620817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.621026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.621053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.621223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.621250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.621435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.621462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.621671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.621698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.621920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.621948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.622130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.622158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.622342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.622370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.622555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.622582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 
00:34:17.724 [2024-07-14 05:48:24.622752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.622779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.622965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.622993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.623234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.623264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.623445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.623481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.623677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.623705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.623915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.623943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.624102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.624130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.624288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.624316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.624492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.624523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 00:34:17.724 [2024-07-14 05:48:24.624725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.724 [2024-07-14 05:48:24.624755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.724 qpair failed and we were unable to recover it. 
00:34:17.725 [2024-07-14 05:48:24.624959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.624987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.625174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.625201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.625388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.625415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.625633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.625661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.625875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.625905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.626133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.626163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.626435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.626462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.626650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.626677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.626860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.626904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.627116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.627143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 
00:34:17.725 [2024-07-14 05:48:24.627344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.627374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.627586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.627615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.627879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.627935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.628100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.628127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.628313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.628340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.628546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.628573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.628755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.628784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.628994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.629022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.629241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.629268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.629456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.629484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 
00:34:17.725 [2024-07-14 05:48:24.629700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.629727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.629916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.629943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.630221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.630251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.630480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.630510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.630745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.630773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.630952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.630981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.631164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.631191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.631363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.631390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.631548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.631577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.631783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.631814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 
00:34:17.725 [2024-07-14 05:48:24.632032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.632060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.632214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.632241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.632424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.632452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.632611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.632642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.632828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.632855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.633066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.633096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.633283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.633310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.633519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.633546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.633730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.633757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.634020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.634048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 
00:34:17.725 [2024-07-14 05:48:24.634289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.634319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.634531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.634558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.634773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.634800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.634989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.635017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.635204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.635232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.725 [2024-07-14 05:48:24.635414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.725 [2024-07-14 05:48:24.635441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.725 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.635609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.635639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.635838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.635878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.636064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.636091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.636284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.636311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 
00:34:17.726 [2024-07-14 05:48:24.636509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.636535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.636732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.636760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.636991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.637021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.637254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.637280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.637490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.637517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.637673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.637700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.637872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.637900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.638063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.638090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.638274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.638306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.638478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.638513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 
00:34:17.726 [2024-07-14 05:48:24.638791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.638818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.639002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.639029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.639238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.639266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.639479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.639506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.639720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.639754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.639967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.639997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.640180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.640207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.640472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.640500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.640711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.640739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.640981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.641010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 
00:34:17.726 [2024-07-14 05:48:24.641247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.641278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.641502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.641530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.641716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.641743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.641918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.641951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.642132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.642160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.642345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.642372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.642608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.642638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.642811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.642841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.643070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.643098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.643362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.643390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 
00:34:17.726 [2024-07-14 05:48:24.643597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.643624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.643809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.643836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.644026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.644053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.644252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.644282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.644511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.644539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.644749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.644776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.644962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.644989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.645256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.645283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.645473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.645500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.726 [2024-07-14 05:48:24.645732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.645763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 
00:34:17.726 [2024-07-14 05:48:24.645966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.726 [2024-07-14 05:48:24.645994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.726 qpair failed and we were unable to recover it. 00:34:17.727 [2024-07-14 05:48:24.646153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.727 [2024-07-14 05:48:24.646180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.727 qpair failed and we were unable to recover it. 00:34:17.727 [2024-07-14 05:48:24.646342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.727 [2024-07-14 05:48:24.646380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.727 qpair failed and we were unable to recover it. 00:34:17.727 [2024-07-14 05:48:24.646600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.727 [2024-07-14 05:48:24.646628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.727 qpair failed and we were unable to recover it. 00:34:17.727 [2024-07-14 05:48:24.646860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.727 [2024-07-14 05:48:24.646908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.727 qpair failed and we were unable to recover it. 00:34:17.727 [2024-07-14 05:48:24.647116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.727 [2024-07-14 05:48:24.647146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.727 qpair failed and we were unable to recover it. 00:34:17.727 [2024-07-14 05:48:24.647381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.727 [2024-07-14 05:48:24.647409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.727 qpair failed and we were unable to recover it. 00:34:17.727 [2024-07-14 05:48:24.647626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.727 [2024-07-14 05:48:24.647653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.727 qpair failed and we were unable to recover it. 00:34:17.727 [2024-07-14 05:48:24.647839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.727 [2024-07-14 05:48:24.647874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.727 qpair failed and we were unable to recover it. 00:34:17.727 [2024-07-14 05:48:24.648030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.727 [2024-07-14 05:48:24.648058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.727 qpair failed and we were unable to recover it. 
00:34:17.727 [2024-07-14 05:48:24.648295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.727 [2024-07-14 05:48:24.648326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420
00:34:17.727 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair error repeats for every retry between 05:48:24.648295 and 05:48:24.697085, always with errno = 111 against tqpair=0x7f5d08000b90, addr=10.0.0.2, port=4420 ...]
00:34:17.732 [2024-07-14 05:48:24.697085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.732 [2024-07-14 05:48:24.697112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420
00:34:17.732 qpair failed and we were unable to recover it.
00:34:17.732 [2024-07-14 05:48:24.697344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.697375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.697585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.697615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.697809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.697836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.698063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.698091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.698323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.698354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.698560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.698587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.698771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.698798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.698983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.699012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.699198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.699225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.699409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.699436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 
00:34:17.732 [2024-07-14 05:48:24.699621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.699650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.699830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.699857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.700076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.700107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.700336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.700366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.700554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.700581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.700807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.700837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.701081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.701109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.701303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.701330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.701536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.701564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.701749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.701776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 
00:34:17.732 [2024-07-14 05:48:24.701988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.702016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.702238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.702269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.702498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.702528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.702737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.702764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.702962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.702991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.703201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.703229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.703441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.703468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.703669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.703700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.703931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.703961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.704195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.704223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 
00:34:17.732 [2024-07-14 05:48:24.704406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.704438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.704631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.704658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-07-14 05:48:24.704840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.732 [2024-07-14 05:48:24.704876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.705094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.705124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.705335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.705365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.705586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.705613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.705773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.705802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.706001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.706029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.706176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.706203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.706429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.706460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 
00:34:17.733 [2024-07-14 05:48:24.706659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.706689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.706904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.706932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.707091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.707120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.707312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.707339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.707526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.707553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.707703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.707731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.707912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.707940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.708125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.708153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.708409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.708437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.708626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.708653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 
00:34:17.733 [2024-07-14 05:48:24.708860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.708892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.709077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.709104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.709347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.709377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.709587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.709614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.709768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.709795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.709981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.710009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.710193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.710220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.710405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.710433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.710651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.710681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.710858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.710894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 
00:34:17.733 [2024-07-14 05:48:24.711078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.711105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.711286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.711313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.711491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.711518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.711743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.711770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.711980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.712008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.712220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.712247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.712418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.712445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.712694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.712724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.712993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.713021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.713177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.713205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 
00:34:17.733 [2024-07-14 05:48:24.713413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.713460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.713673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.713700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.713911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.713939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.714098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.714126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.714330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.714356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.714596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.714626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.714801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.714831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.715030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.715059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.715244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.733 [2024-07-14 05:48:24.715271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.733 qpair failed and we were unable to recover it. 00:34:17.733 [2024-07-14 05:48:24.715452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.715480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 
00:34:17.734 [2024-07-14 05:48:24.715684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.715711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.715917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.715949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.716156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.716188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.716396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.716424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.716646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.716674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.716880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.716908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.717088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.717115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.717326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.717357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.717560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.717591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.717814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.717845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 
00:34:17.734 [2024-07-14 05:48:24.718083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.718111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.718274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.718302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.718488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.718516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.718749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.718780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.718994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.719025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.719236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.719264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.719452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.719480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.719695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.719723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.719886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.719914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.720152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.720182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 
00:34:17.734 [2024-07-14 05:48:24.720386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.720416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.720594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.720636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.720836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.720873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.721051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.721078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.721279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.721307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.721493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.721520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.721679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.721707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.721916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.721944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.722161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.722192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.722398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.722426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 
00:34:17.734 [2024-07-14 05:48:24.722612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.722644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.722806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.722835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.723077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.723108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.723294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.723323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.723510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.723538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.723700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.723727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.723910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.723938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.724088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.724115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.724305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.724333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.724551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.724578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 
00:34:17.734 [2024-07-14 05:48:24.724788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.724818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.734 [2024-07-14 05:48:24.725036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.734 [2024-07-14 05:48:24.725065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.734 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.725249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.725277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.725465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.725492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.725678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.725706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.725901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.725929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.726148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.726193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.726371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.726401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.726600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.726628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.726841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.726885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 
00:34:17.735 [2024-07-14 05:48:24.727128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.727158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.727400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.727427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.727657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.727687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.727875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.727908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.728135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.728162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.728338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.728368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.728574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.728605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.728912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.728941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.729094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.729122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.729265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.729292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 
00:34:17.735 [2024-07-14 05:48:24.729474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.729501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.729712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.729742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.729953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.729981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.730192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.730220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.730443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.730470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.730653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.730683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.730927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.730955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.731122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.731150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.731324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.731355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.731620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.731651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 
00:34:17.735 [2024-07-14 05:48:24.731830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.731871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.732052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.732079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.732242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.732269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.732504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.732535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.732766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.732793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.732976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.733005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.733193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.733221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.733426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.733456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.733632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.733659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.733894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.733925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 
00:34:17.735 [2024-07-14 05:48:24.734155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.734186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.734421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.734449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.734678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.734709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.734906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.734937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.735 qpair failed and we were unable to recover it. 00:34:17.735 [2024-07-14 05:48:24.735148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.735 [2024-07-14 05:48:24.735176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.735383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.735413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.735582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.735612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.735790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.735817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.736003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.736031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.736233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.736263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 
00:34:17.736 [2024-07-14 05:48:24.736494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.736521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.736759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.736789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.737030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.737058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.737242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.737269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.737457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.737484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.737671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.737702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.737903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.737931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.738165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.738195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.738430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.738460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.738668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.738695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 
00:34:17.736 [2024-07-14 05:48:24.738907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.738939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.739182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.739209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.739404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.739432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.739639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.739669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.739898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.739926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.740109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.740136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.740309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.740339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.740571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.740598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.740812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.740842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.741097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.741124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 
00:34:17.736 [2024-07-14 05:48:24.741362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.741396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.741586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.741613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.741844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.741880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.742113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.742143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.742346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.742374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.742583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.742614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.742844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.742887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.743117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.743144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.743376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.743406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 00:34:17.736 [2024-07-14 05:48:24.743641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.736 [2024-07-14 05:48:24.743671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.736 qpair failed and we were unable to recover it. 
00:34:17.736 [2024-07-14 05:48:24.743883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.743911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.744091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.744118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.744349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.744379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.744592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.744619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.744857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.744894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.745101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.745132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.745310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.745337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.745537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.745567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.745750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.745778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.745985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.746014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 
00:34:17.737 [2024-07-14 05:48:24.746254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.746281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.746464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.746492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.746752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.746780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.746971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.747001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.747232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.747262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.747442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.747470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.747697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.747727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.747917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.747948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.748180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.748207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.748416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.748446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 
00:34:17.737 [2024-07-14 05:48:24.748668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.748695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.748881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.748929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.749139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.749184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.749389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.749420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.749652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.749679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.749908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.749938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.750167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.750197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.750386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.750413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.750648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.750679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.750893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.750924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 
00:34:17.737 [2024-07-14 05:48:24.751158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.751189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.751429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.751460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.751661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.751691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.751907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.751934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.752120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.752157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.752395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.752424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.752660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.752686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.752878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.752920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.753143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.753172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.753400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.753427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 
00:34:17.737 [2024-07-14 05:48:24.753610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.753639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.753844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.753879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.754066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.754093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.754305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.754334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.754524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.754553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.737 [2024-07-14 05:48:24.754783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.737 [2024-07-14 05:48:24.754809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.737 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.755049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.755079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.755308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.755338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.755542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.755569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.755770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.755800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 
00:34:17.738 [2024-07-14 05:48:24.756012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.756042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.756275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.756302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.756513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.756543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.756778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.756807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.756993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.757020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.757255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.757284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.757463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.757494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.757680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.757708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.757915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.757946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.758128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.758158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 
00:34:17.738 [2024-07-14 05:48:24.758365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.758392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.758603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.758632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.758870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.758912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.759118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.759145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.759378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.759408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.759610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.759639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.759838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.759882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.760087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.760115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.760354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.760385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.760571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.760598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 
00:34:17.738 [2024-07-14 05:48:24.760807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.760838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.761003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.761031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.761213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.761239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.761475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.761504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.761679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.761709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.761949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.761976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.762185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.762215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.762395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.762426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.762645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.762672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.762851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.762887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 
00:34:17.738 [2024-07-14 05:48:24.763077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.763103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.763341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.763370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.763562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.763592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.763829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.763855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.764152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.764183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.764390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.764418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.764638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.764667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.764880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.764921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.765127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.765154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.765336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.765363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 
00:34:17.738 [2024-07-14 05:48:24.765575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.765617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.738 [2024-07-14 05:48:24.765871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.738 [2024-07-14 05:48:24.765899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.738 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.766114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.766143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.766350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.766380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.766560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.766587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.766794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.766826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.767044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.767075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.767265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.767292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.767480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.767507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.767717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.767748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 
00:34:17.739 [2024-07-14 05:48:24.767930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.767957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.768138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.768165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.768402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.768431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.768633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.768660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.768843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.768880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.769106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.769139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.769391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.769418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.769640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.769670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.769891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.769930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.770133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.770161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 
00:34:17.739 [2024-07-14 05:48:24.770339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.770374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.770587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.770617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.770830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.770857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.771112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.771141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.771365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.771394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.771593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.771620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.771854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.771891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.772074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.772103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.772276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.772304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.772510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.772539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 
00:34:17.739 [2024-07-14 05:48:24.772719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.772746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.772962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.772988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.773170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.773200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.773401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.773430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.773674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.773700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.773943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.773970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.774125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.774153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.774336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.774362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.774545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.774572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.774747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.774774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 
00:34:17.739 [2024-07-14 05:48:24.774982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.775009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.775194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.775223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.775621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.775682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.775889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.775918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.739 qpair failed and we were unable to recover it. 00:34:17.739 [2024-07-14 05:48:24.776104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.739 [2024-07-14 05:48:24.776131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.776469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.776524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.776757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.776783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.776997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.777031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.777263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.777290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.777472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.777499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 
00:34:17.740 [2024-07-14 05:48:24.777743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.777772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.777973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.778001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.778180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.778207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.778387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.778416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.778730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.778785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.779026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.779054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.779285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.779315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.779562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.779588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.779771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.779798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.780005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.780036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 
00:34:17.740 [2024-07-14 05:48:24.780202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.780232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.780471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.780498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.780693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.780720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.780954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.780984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.781159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.781186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.781387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.781416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.781765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.781825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.782046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.782073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.782256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.782283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.782646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.782677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 
00:34:17.740 [2024-07-14 05:48:24.782898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.782925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.783080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.783107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.783311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.783340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.783548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.783575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.783787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.783818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.784055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.784085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.784281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.784308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.784488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.784518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.740 [2024-07-14 05:48:24.784749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.740 [2024-07-14 05:48:24.784776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.740 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.784983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.785010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 
00:34:17.741 [2024-07-14 05:48:24.785197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.785227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.785573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.785623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.785856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.785890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.786082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.786111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.786336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.786365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.786595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.786621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.786825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.786855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.787072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.787106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.787276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.787302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.787500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.787529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 
00:34:17.741 [2024-07-14 05:48:24.787723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.787752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.787964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.787991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.788201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.788230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.788409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.788438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.788619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.788646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.788820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.788849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.789058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.789088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.789290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.789317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.789495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.789524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.789763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.789790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 
00:34:17.741 [2024-07-14 05:48:24.789999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.790026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.790250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.790277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.790468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.790498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.790683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.790709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.790909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.790939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.791149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.791178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.791402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.791428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.791663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.791692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.791877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.791905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.792088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.792114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 
00:34:17.741 [2024-07-14 05:48:24.792345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.792374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.792575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.792604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.792807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.792836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.793038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.793064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.793249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.793279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.793496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.793523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.793859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.793928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.794163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.794205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.794421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.794448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.794604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.794631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 
00:34:17.741 [2024-07-14 05:48:24.794841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.794892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.741 qpair failed and we were unable to recover it. 00:34:17.741 [2024-07-14 05:48:24.795061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.741 [2024-07-14 05:48:24.795087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 00:34:17.742 [2024-07-14 05:48:24.795255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.795285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 00:34:17.742 [2024-07-14 05:48:24.795459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.795488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 00:34:17.742 [2024-07-14 05:48:24.795695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.795723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 00:34:17.742 [2024-07-14 05:48:24.795951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.795990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 00:34:17.742 [2024-07-14 05:48:24.796215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.796246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 00:34:17.742 [2024-07-14 05:48:24.796429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.796461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 00:34:17.742 [2024-07-14 05:48:24.796671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.796701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 00:34:17.742 [2024-07-14 05:48:24.796895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.796926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 
00:34:17.742 [2024-07-14 05:48:24.797171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.797198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 00:34:17.742 [2024-07-14 05:48:24.797407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.797437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 00:34:17.742 [2024-07-14 05:48:24.797642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.797673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 00:34:17.742 [2024-07-14 05:48:24.797923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.797953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 00:34:17.742 [2024-07-14 05:48:24.798167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.798198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 00:34:17.742 [2024-07-14 05:48:24.798428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.798458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 00:34:17.742 [2024-07-14 05:48:24.798638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.798664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 00:34:17.742 [2024-07-14 05:48:24.798895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.798925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 00:34:17.742 [2024-07-14 05:48:24.799126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.799156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 00:34:17.742 [2024-07-14 05:48:24.799349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.742 [2024-07-14 05:48:24.799377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:17.742 qpair failed and we were unable to recover it. 
00:34:17.742 [2024-07-14 05:48:24.799634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.020 [2024-07-14 05:48:24.799667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.020 qpair failed and we were unable to recover it. 00:34:18.020 [2024-07-14 05:48:24.799907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.020 [2024-07-14 05:48:24.799938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.020 qpair failed and we were unable to recover it. 00:34:18.020 [2024-07-14 05:48:24.800123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.020 [2024-07-14 05:48:24.800150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.020 qpair failed and we were unable to recover it. 00:34:18.020 [2024-07-14 05:48:24.800327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.020 [2024-07-14 05:48:24.800355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.020 qpair failed and we were unable to recover it. 00:34:18.020 [2024-07-14 05:48:24.800524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.020 [2024-07-14 05:48:24.800554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.020 qpair failed and we were unable to recover it. 00:34:18.020 [2024-07-14 05:48:24.800783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.020 [2024-07-14 05:48:24.800813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.020 qpair failed and we were unable to recover it. 00:34:18.020 [2024-07-14 05:48:24.801030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.020 [2024-07-14 05:48:24.801057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.020 qpair failed and we were unable to recover it. 00:34:18.020 [2024-07-14 05:48:24.801264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.020 [2024-07-14 05:48:24.801295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.020 qpair failed and we were unable to recover it. 00:34:18.020 [2024-07-14 05:48:24.801527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.020 [2024-07-14 05:48:24.801554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.020 qpair failed and we were unable to recover it. 00:34:18.020 [2024-07-14 05:48:24.801766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.020 [2024-07-14 05:48:24.801795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.020 qpair failed and we were unable to recover it. 
00:34:18.020 [2024-07-14 05:48:24.801977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.020 [2024-07-14 05:48:24.802007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.020 qpair failed and we were unable to recover it. 00:34:18.020 [2024-07-14 05:48:24.802185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.020 [2024-07-14 05:48:24.802212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.020 qpair failed and we were unable to recover it. 00:34:18.020 [2024-07-14 05:48:24.802417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.020 [2024-07-14 05:48:24.802446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.020 qpair failed and we were unable to recover it. 00:34:18.020 [2024-07-14 05:48:24.802652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.020 [2024-07-14 05:48:24.802681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.020 qpair failed and we were unable to recover it. 00:34:18.020 [2024-07-14 05:48:24.802925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.802953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.803185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.803215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.803419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.803446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.803605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.803632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.803836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.803873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.804108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.804137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 
00:34:18.021 [2024-07-14 05:48:24.804321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.804348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.804547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.804576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.804766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.804796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.804999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.805026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.805265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.805295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.805507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.805536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.805713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.805740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.805939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.805974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.806148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.806177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.806376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.806402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 
00:34:18.021 [2024-07-14 05:48:24.806578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.806608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.806833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.806859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.807030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.807057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.807286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.807315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.807551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.807580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.807823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.807850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.808075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.808102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.808336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.808365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.808568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.808594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.808843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.808880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 
00:34:18.021 [2024-07-14 05:48:24.809114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.809143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.809328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.809355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.809559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.809589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.809762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.809793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.810005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.810033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.810266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.810295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.810533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.810562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.810758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.810784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.810991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.811021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.811246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.811275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 
00:34:18.021 [2024-07-14 05:48:24.811486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.811512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.811723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.811752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.811954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.811984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.812207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.812233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.812427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.812454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.812661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.812690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.021 [2024-07-14 05:48:24.812921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.021 [2024-07-14 05:48:24.812948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.021 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.813146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.813173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.813378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.813408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.813611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.813638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 
00:34:18.022 [2024-07-14 05:48:24.813845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.813883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.814081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.814110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.814314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.814341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.814529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.814556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.814789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.814818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.815033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.815060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.815262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.815292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.815462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.815496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.815700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.815727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.815938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.815968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 
00:34:18.022 [2024-07-14 05:48:24.816201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.816230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.816430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.816457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.816659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.816689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.816895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.816925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.817129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.817156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.817364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.817394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.817623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.817652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.817859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.817892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.818099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.818128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.818292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.818321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 
00:34:18.022 [2024-07-14 05:48:24.818531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.818557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.818744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.818770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.818960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.818988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.819170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.819197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.819376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.819406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.819606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.819633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.819864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.819914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.820100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.820141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.820375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.820401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.820608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.820634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 
00:34:18.022 [2024-07-14 05:48:24.820849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.820887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.821117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.821147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.821352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.821378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.821567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.821594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.821807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.821837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.822059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.822086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.822247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.822274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.822452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.822479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.022 qpair failed and we were unable to recover it. 00:34:18.022 [2024-07-14 05:48:24.822637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.022 [2024-07-14 05:48:24.822665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.822877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.822908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 
00:34:18.023 [2024-07-14 05:48:24.823112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.823142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.823384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.823410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.823630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.823660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.823863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.823899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.824126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.824153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.824388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.824417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.824623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.824652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.824856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.824901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.825110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.825140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.825340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.825369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 
00:34:18.023 [2024-07-14 05:48:24.825601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.825628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.825856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.825889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.826097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.826143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.826385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.826411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.826646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.826675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.826877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.826908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.827115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.827142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.827322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.827349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.827548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.827577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.827785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.827812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 
00:34:18.023 [2024-07-14 05:48:24.828015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.828045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.828253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.828282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.828488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.828514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.828721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.828750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.828956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.828983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.829194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.829222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.829443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.829473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.829675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.829704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.829911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.829938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.830137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.830166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 
00:34:18.023 [2024-07-14 05:48:24.830374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.830403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.830611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.830637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.830812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.830842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.831044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.831074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.831288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.831315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.831541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.831570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.831737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.831766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.831974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.832001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.832179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.832209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.832437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.832463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 
00:34:18.023 [2024-07-14 05:48:24.832647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.832673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.832829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.832855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.833046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.833073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.023 [2024-07-14 05:48:24.833230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.023 [2024-07-14 05:48:24.833256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.023 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.833472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.833501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.833729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.833758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.833946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.833973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.834152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.834183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.834389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.834419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.834586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.834613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 
00:34:18.024 [2024-07-14 05:48:24.834814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.834843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.835057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.835087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.835287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.835314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.835553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.835583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.835779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.835808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.836003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.836030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.836239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.836268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.836482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.836511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.836817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.836846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.837090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.837117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 
00:34:18.024 [2024-07-14 05:48:24.837366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.837395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.837606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.837634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.837860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.837897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.838100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.838127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.838311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.838338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.838549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.838592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.838796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.838825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.839063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.839090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.839291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.839321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.839548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.839577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 
00:34:18.024 [2024-07-14 05:48:24.839811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.839838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.840013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.840041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.840254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.840284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.840493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.840520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.840818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.840847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.841089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.841119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.841297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.841323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.841501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.841527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.841685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.841711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 00:34:18.024 [2024-07-14 05:48:24.841894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.024 [2024-07-14 05:48:24.841922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.024 qpair failed and we were unable to recover it. 
00:34:18.025 [2024-07-14 05:48:24.842134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.842177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.842342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.842372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.842577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.842604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.842791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.842817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.843029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.843059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.843232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.843258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.843474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.843504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.843740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.843771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.843963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.843991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.844237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.844266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 
00:34:18.025 [2024-07-14 05:48:24.844446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.844475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.844683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.844710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.844919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.844950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.845150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.845179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.845422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.845449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.845688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.845717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.845935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.845963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.846147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.846174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.846384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.846414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.846616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.846645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 
00:34:18.025 [2024-07-14 05:48:24.846850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.846884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.847072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.847101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.847283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.847311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.847517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.847545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.847750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.847780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.847988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.848017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.848201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.848228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.848432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.848462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.848684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.848711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 00:34:18.025 [2024-07-14 05:48:24.848896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.025 [2024-07-14 05:48:24.848933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.025 qpair failed and we were unable to recover it. 
00:34:18.025 [2024-07-14 05:48:24.849147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.025 [2024-07-14 05:48:24.849177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420
00:34:18.025 qpair failed and we were unable to recover it.
00:34:18.030 [2024-07-14 05:48:24.849147 - 05:48:24.898964] (the three-line error above - posix_sock_create connect() failure with errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." - repeats approximately 210 times within this ~50 ms window; only the first occurrence is shown here)
00:34:18.030 [2024-07-14 05:48:24.899123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.030 [2024-07-14 05:48:24.899151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.030 qpair failed and we were unable to recover it. 00:34:18.030 [2024-07-14 05:48:24.899357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.030 [2024-07-14 05:48:24.899388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.030 qpair failed and we were unable to recover it. 00:34:18.030 [2024-07-14 05:48:24.899594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.030 [2024-07-14 05:48:24.899621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.030 qpair failed and we were unable to recover it. 00:34:18.030 [2024-07-14 05:48:24.899829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.030 [2024-07-14 05:48:24.899858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.030 qpair failed and we were unable to recover it. 00:34:18.030 [2024-07-14 05:48:24.900064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.030 [2024-07-14 05:48:24.900094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.030 qpair failed and we were unable to recover it. 00:34:18.030 [2024-07-14 05:48:24.900304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.030 [2024-07-14 05:48:24.900331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.030 qpair failed and we were unable to recover it. 00:34:18.030 [2024-07-14 05:48:24.900534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.030 [2024-07-14 05:48:24.900563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.030 qpair failed and we were unable to recover it. 00:34:18.030 [2024-07-14 05:48:24.900764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.030 [2024-07-14 05:48:24.900795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.030 qpair failed and we were unable to recover it. 00:34:18.030 [2024-07-14 05:48:24.901037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.030 [2024-07-14 05:48:24.901064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.030 qpair failed and we were unable to recover it. 00:34:18.030 [2024-07-14 05:48:24.901251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.030 [2024-07-14 05:48:24.901280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.030 qpair failed and we were unable to recover it. 
00:34:18.030 [2024-07-14 05:48:24.901484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.030 [2024-07-14 05:48:24.901513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.901723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.901749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.901905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.901933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.902114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.902140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.902323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.902349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.902558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.902587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.902785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.902814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.903025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.903053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.903262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.903293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.903533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.903563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 
00:34:18.031 [2024-07-14 05:48:24.903766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.903795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.904031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.904059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.904302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.904332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.904543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.904570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.904755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.904782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.904998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.905025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.905234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.905260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.905465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.905495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.905720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.905747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.905931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.905959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 
00:34:18.031 [2024-07-14 05:48:24.906176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.906205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.906412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.906439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.906623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.906651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.906863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.906900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.907133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.907169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.907405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.907432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.907663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.907693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.907894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.907924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.908106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.908132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.908319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.908346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 
00:34:18.031 [2024-07-14 05:48:24.908570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.908596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.908811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.908837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.909063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.909090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.909291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.909320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.909528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.909555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.909726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.909755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.909982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.910012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.910218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.910245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.910463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.910493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.910690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.910721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 
00:34:18.031 [2024-07-14 05:48:24.910957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.910984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.911197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.911226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.911426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.911457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.911668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.911696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.911908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.911938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.912107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.912138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.031 qpair failed and we were unable to recover it. 00:34:18.031 [2024-07-14 05:48:24.912314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.031 [2024-07-14 05:48:24.912341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.912571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.912600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.912778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.912809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.912981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.913008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 
00:34:18.032 [2024-07-14 05:48:24.913207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.913239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.913414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.913444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.913652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.913679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.913863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.913908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.914118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.914162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.914376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.914403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.914577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.914606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.914807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.914836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.915080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.915107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.915326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.915352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 
00:34:18.032 [2024-07-14 05:48:24.915533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.915559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.915765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.915792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.916001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.916031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.916201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.916230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.916460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.916491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.916733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.916763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.916967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.916998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.917211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.917237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.917416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.917446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.917653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.917680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 
00:34:18.032 [2024-07-14 05:48:24.917863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.917895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.918136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.918165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.918362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.918392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.918569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.918596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.918822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.918852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.919086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.919116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.919324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.919350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.919536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.919563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.919807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.919836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.920088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.920114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 
00:34:18.032 [2024-07-14 05:48:24.920318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.920348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.920577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.920606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.920801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.920828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.032 [2024-07-14 05:48:24.921023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.032 [2024-07-14 05:48:24.921050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.032 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.921283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.921312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.921499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.921526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.921735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.921765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.921973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.922000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.922214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.922240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.922427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.922453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 
00:34:18.033 [2024-07-14 05:48:24.922633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.922659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.922882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.922909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.923074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.923101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.923300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.923329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.923565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.923591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.923831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.923860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.924048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.924078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.924320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.924346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.924557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.924586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.924786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.924817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 
00:34:18.033 [2024-07-14 05:48:24.925053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.925081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.925299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.925328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.925530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.925559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.925849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.925885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.926130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.926165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.926394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.926424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.926635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.926662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.926894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.926925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.927123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.927153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.927357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.927384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 
00:34:18.033 [2024-07-14 05:48:24.927546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.927572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.927776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.927806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.928000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.928028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.928212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.928240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.928424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.928451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.928657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.928684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.928919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.928949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.929174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.929203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.929391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.929418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.929649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.929679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 
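Note (context, not part of the captured output): errno = 111 on Linux is ECONNREFUSED, meaning the host could reach 10.0.0.2 but nothing was accepting connections on port 4420, which is expected while the target application is down. The minimal C sketch below (illustrative only; the loopback address is chosen so the example is self-contained) reproduces the same failure mode by connecting to a port with no listener:

    /* Illustrative only: reproduces the "connect() failed, errno = 111" condition. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                      /* NVMe/TCP port, as in the log */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* loopback with no listener */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With nothing listening, errno is ECONNREFUSED (111 on Linux). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }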
00:34:18.033 [2024-07-14 05:48:24.929915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.929958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.930144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.930170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.930381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.930424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.930631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.930662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.930862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.930897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.931103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3393366 Killed "${NVMF_APP[@]}" "$@" 00:34:18.033 [2024-07-14 05:48:24.931134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 [2024-07-14 05:48:24.931340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.931369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 00:34:18.033 05:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:34:18.033 [2024-07-14 05:48:24.931580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.033 [2024-07-14 05:48:24.931607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.033 qpair failed and we were unable to recover it. 
00:34:18.034 05:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:18.034 05:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:18.034 [2024-07-14 05:48:24.931840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.931878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 05:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:18.034 [2024-07-14 05:48:24.932058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.932088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 05:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:18.034 [2024-07-14 05:48:24.932303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.932329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.932512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.932539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.932738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.932768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.932970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.932997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.933181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.933210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.933418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.933448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 
00:34:18.034 [2024-07-14 05:48:24.933628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.933655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.933861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.933899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.934129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.934156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.934337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.934367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.934586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.934631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.934829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.934858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.935080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.935107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.935315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.935344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.935543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.935573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.935778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.935806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 
00:34:18.034 05:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3393917 00:34:18.034 05:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:18.034 [2024-07-14 05:48:24.936003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.936031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 05:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3393917 00:34:18.034 [2024-07-14 05:48:24.936228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.936258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 05:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3393917 ']' 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 05:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.034 [2024-07-14 05:48:24.936472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.936499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 05:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:18.034 05:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:18.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:18.034 [2024-07-14 05:48:24.936713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.936743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 05:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:18.034 05:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:18.034 [2024-07-14 05:48:24.936962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.936989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 
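The trace interleaved above shows the recovery path: nvmfappstart launches a fresh build/bin/nvmf_tgt (-i 0 -e 0xFFFF -m 0xF0) inside the cvl_0_0_ns_spdk namespace, records nvmfpid=3393917, and waitforlisten blocks until that process accepts RPCs on /var/tmp/spdk.sock. A hedged sketch of the idea behind that wait (polling a UNIX-domain socket until it accepts connections; the path is taken from the trace, the retry count and interval are assumptions, and this is not the actual shell helper):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Poll a UNIX-domain socket until something is listening on it. */
static int wait_for_listen(const char *path, int retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    for (int i = 0; i < retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;            /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);      /* assumed 100 ms retry interval */
    }
    return -1;                   /* gave up waiting */
}

int main(void)
{
    return wait_for_listen("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
}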
00:34:18.034 [2024-07-14 05:48:24.937154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.937181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.937363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.937390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.937542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.937569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.937758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.937784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.937998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.938027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.938212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.938239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.938396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.938423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.938579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.938606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.938782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.938809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.938996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.939023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 
00:34:18.034 [2024-07-14 05:48:24.939209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.939237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.939419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.939446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.939642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.939670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.939872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.939900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.940071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.034 [2024-07-14 05:48:24.940097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.034 qpair failed and we were unable to recover it. 00:34:18.034 [2024-07-14 05:48:24.940281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.940307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.940539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.940567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.940795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.940824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.941014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.941041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.941231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.941257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 
00:34:18.035 [2024-07-14 05:48:24.941442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.941484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.941723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.941754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.941970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.941999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.942179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.942205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.942380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.942407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.942624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.942652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.942874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.942922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.943112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.943139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.943305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.943331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.943507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.943535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 
00:34:18.035 [2024-07-14 05:48:24.943811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.943839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.944049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.944077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.944265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.944292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.944481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.944509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.944692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.944720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.944926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.944954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.945105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.945149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.945363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.945390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.945555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.945582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.945792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.945819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 
00:34:18.035 [2024-07-14 05:48:24.946050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.946078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.946248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.946275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.946454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.946481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.946642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.946669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.946832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.946859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.947028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.947055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.947248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.947275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.947435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.947461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.947671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.947697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.947888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.947915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 
00:34:18.035 [2024-07-14 05:48:24.948100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.948127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.948338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.948365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.948547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.948575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.948787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.948813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.948973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.948999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.949180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.949206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.949392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.949419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.949601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.035 [2024-07-14 05:48:24.949627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.035 qpair failed and we were unable to recover it. 00:34:18.035 [2024-07-14 05:48:24.949832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.949858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.950053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.950079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 
00:34:18.036 [2024-07-14 05:48:24.950234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.950261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.950441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.950467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.950653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.950680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.950871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.950899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.951049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.951076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.951268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.951294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.951476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.951507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.951672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.951699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.951844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.951876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.952032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.952058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 
00:34:18.036 [2024-07-14 05:48:24.952242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.952269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.952457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.952485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.952637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.952664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.952876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.952904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.953066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.953092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.953237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.953263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.953449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.953476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.953656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.953683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.953892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.953919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.954127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.954153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 
00:34:18.036 [2024-07-14 05:48:24.954344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.954371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.954556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.954582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.954765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.954791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.954952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.954979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.955186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.955212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.955399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.955426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.955583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.955610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.955770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.955798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.955987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.956014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.956198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.956225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 
00:34:18.036 [2024-07-14 05:48:24.956406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.956432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.956607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.956634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.956812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.956839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.957071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.957115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.957330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.957358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.036 [2024-07-14 05:48:24.957523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.036 [2024-07-14 05:48:24.957552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.036 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.957717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.957744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.957947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.957978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.958141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.958168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.958334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.958363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 
00:34:18.037 [2024-07-14 05:48:24.958551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.958589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.958797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.958826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.959047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.959086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.959307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.959335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.959522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.959548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.959761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.959791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.960025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.960060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.960255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.960288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.960488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.960517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.960706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.960733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 
00:34:18.037 [2024-07-14 05:48:24.960893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.960920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.961118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.961160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.961353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.961380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.961580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.961610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.961806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.961833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.962004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.962031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.962184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.962211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.962584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.962642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.962842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.962881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.963067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.963093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 
00:34:18.037 [2024-07-14 05:48:24.963307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.963334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.963497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.963524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.963683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.963711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.963876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.963903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.964063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.964090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.964274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.964301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.964512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.964542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.964748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.964777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.964956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.964983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.965144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.965170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 
00:34:18.037 [2024-07-14 05:48:24.965383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.965409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.965594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.965620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.965816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.965844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.966011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.966038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.966224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.037 [2024-07-14 05:48:24.966250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.037 qpair failed and we were unable to recover it. 00:34:18.037 [2024-07-14 05:48:24.966584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.966637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.966835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.966882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.967080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.967107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.967295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.967322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.967530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.967558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 
00:34:18.038 [2024-07-14 05:48:24.967713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.967739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.967932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.967960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.968149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.968177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.968356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.968383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.968686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.968744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.968968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.968996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.969158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.969200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.969594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.969649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.969849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.969887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.970086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.970112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 
00:34:18.038 [2024-07-14 05:48:24.970390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.970440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.970609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.970638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.970838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.970864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.971081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.971108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.971319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.971345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.971524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.971550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.971738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.971767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.971974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.972002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.972177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.972204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.972530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.972576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 
00:34:18.038 [2024-07-14 05:48:24.972803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.972832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.973042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.973069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.973250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.973276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.973458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.973499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.973695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.973724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.973923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.973950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.974134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.974161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.974318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.974361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.974757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.974813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.974974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.975002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 
00:34:18.038 [2024-07-14 05:48:24.975192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.975218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.975423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.975449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.975607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.975633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.975818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.975844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.976050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.976077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.976264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.976291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.976500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.976526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.976729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.976759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.976953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.976980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.038 qpair failed and we were unable to recover it. 00:34:18.038 [2024-07-14 05:48:24.977148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.038 [2024-07-14 05:48:24.977175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 
00:34:18.039 [2024-07-14 05:48:24.977352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.977379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.977569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.977596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.977790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.977817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.978010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.978037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.978196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.978223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.978427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.978454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.978846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.978912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.979150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.979177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.979379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.979406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.979717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.979770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 
00:34:18.039 [2024-07-14 05:48:24.979981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.980009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.980191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.980219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.980380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.980407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.980620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.980647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.980810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.980837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.981032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.981059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.981207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.981234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.981444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.981471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.981691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.981720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.981937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.981964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 
00:34:18.039 [2024-07-14 05:48:24.982128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.982155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.982308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.982334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.982498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.982526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.982710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.982736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.982936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.982964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.983161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.983191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.983402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.983432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.983596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.983577] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:18.039 [2024-07-14 05:48:24.983623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.983654] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:18.039 [2024-07-14 05:48:24.983857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.983916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 
00:34:18.039 [2024-07-14 05:48:24.984103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.984129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.984309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.984336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.984493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.984519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.984729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.984759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.984964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.984991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.985165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.985192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.985395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.985422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.985609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.985636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.985822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.985849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.986070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.986100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 
00:34:18.039 [2024-07-14 05:48:24.986313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.986340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.986521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.986548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.986704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.986732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.039 [2024-07-14 05:48:24.986944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.039 [2024-07-14 05:48:24.986972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.039 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.987177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.987207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.987439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.987469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.987642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.987673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.987882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.987913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.988138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.988167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.988375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.988402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 
00:34:18.040 [2024-07-14 05:48:24.988614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.988640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.988804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.988831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.988998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.989026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.989182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.989209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.989413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.989440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.989646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.989673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.989861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.989897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.990106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.990150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.990371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.990397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.990581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.990608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 
00:34:18.040 [2024-07-14 05:48:24.990805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.990831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.991052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.991079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.991267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.991294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.991504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.991545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.991762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.991788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.991960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.991988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.992213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.992243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.992442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.992468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.992676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.992705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.992928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.992956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 
00:34:18.040 [2024-07-14 05:48:24.993118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.993146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.993357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.993384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.993564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.993590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.993781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.993808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.994020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.994047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.994236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.994265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.994450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.994477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.994657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.994683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.040 qpair failed and we were unable to recover it. 00:34:18.040 [2024-07-14 05:48:24.994906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.040 [2024-07-14 05:48:24.994940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.995172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.995201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 
00:34:18.041 [2024-07-14 05:48:24.995360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.995388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.995577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.995604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.995759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.995785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.995983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.996011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.996198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.996225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.996492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.996518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.996682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.996712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.996902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.996931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.997116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.997143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.997330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.997356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 
00:34:18.041 [2024-07-14 05:48:24.997537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.997564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.997775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.997801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.997982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.998010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.998166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.998192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.998401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.998427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.998620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.998648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.998831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.998860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.999057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.999085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.999261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.999288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:24.999534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.999560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 
00:34:18.041 [2024-07-14 05:48:24.999762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:24.999789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.000000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.000028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.000204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.000235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.000411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.000438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.000673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.000702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.000890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.000919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.001142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.001169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.001391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.001418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.001590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.001619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.001818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.001845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 
00:34:18.041 [2024-07-14 05:48:25.002042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.002069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.002259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.002286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.002446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.002473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.002657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.002684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.002844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.002885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.003075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.003101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.003279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.003305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.003487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.003514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.003664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.003690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 00:34:18.041 [2024-07-14 05:48:25.003841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.041 [2024-07-14 05:48:25.003876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.041 qpair failed and we were unable to recover it. 
00:34:18.041 [2024-07-14 05:48:25.004086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.041 [2024-07-14 05:48:25.004130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420
00:34:18.041 qpair failed and we were unable to recover it.
00:34:18.042 [2024-07-14 05:48:25.011478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.042 [2024-07-14 05:48:25.011525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420
00:34:18.042 qpair failed and we were unable to recover it.
00:34:18.043 [2024-07-14 05:48:25.023189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.043 [2024-07-14 05:48:25.023216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420
00:34:18.043 EAL: No free 2048 kB hugepages reported on node 1
00:34:18.043 qpair failed and we were unable to recover it.
[The same three-line failure repeats continuously from 05:48:25.004 through 05:48:25.049 (console timestamps 00:34:18.041-00:34:18.046), alternating between tqpair=0x7f5d08000b90 and tqpair=0x7f5cf8000b90; every connect() to 10.0.0.2, port 4420 fails with errno = 111 and the qpair is never recovered.]
00:34:18.046 [2024-07-14 05:48:25.049516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.046 [2024-07-14 05:48:25.049543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.046 qpair failed and we were unable to recover it. 00:34:18.046 [2024-07-14 05:48:25.049749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.046 [2024-07-14 05:48:25.049775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.046 qpair failed and we were unable to recover it. 00:34:18.046 [2024-07-14 05:48:25.049971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.046 [2024-07-14 05:48:25.049999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.046 qpair failed and we were unable to recover it. 00:34:18.046 [2024-07-14 05:48:25.050181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.046 [2024-07-14 05:48:25.050208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.046 qpair failed and we were unable to recover it. 00:34:18.046 [2024-07-14 05:48:25.050390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.046 [2024-07-14 05:48:25.050417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.046 qpair failed and we were unable to recover it. 00:34:18.046 [2024-07-14 05:48:25.050578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.046 [2024-07-14 05:48:25.050604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.046 qpair failed and we were unable to recover it. 00:34:18.046 [2024-07-14 05:48:25.050765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.046 [2024-07-14 05:48:25.050792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.046 qpair failed and we were unable to recover it. 00:34:18.046 [2024-07-14 05:48:25.050988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.046 [2024-07-14 05:48:25.051015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.046 qpair failed and we were unable to recover it. 00:34:18.046 [2024-07-14 05:48:25.051179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.046 [2024-07-14 05:48:25.051207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.046 qpair failed and we were unable to recover it. 00:34:18.046 [2024-07-14 05:48:25.051366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.046 [2024-07-14 05:48:25.051393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.046 qpair failed and we were unable to recover it. 
00:34:18.046 [2024-07-14 05:48:25.051580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.046 [2024-07-14 05:48:25.051607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.046 qpair failed and we were unable to recover it. 00:34:18.046 [2024-07-14 05:48:25.051794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.046 [2024-07-14 05:48:25.051820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.046 qpair failed and we were unable to recover it. 00:34:18.046 [2024-07-14 05:48:25.051988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.052015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.052178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.052204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.052388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.052415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.052564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.052591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.052765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.052792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.052960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.052988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.053168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.053194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.053346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.053372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 
00:34:18.047 [2024-07-14 05:48:25.053551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.053577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.053761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.053788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.053981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.054008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.054191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.054218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.054397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.054423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.054592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.054618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.054768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.054796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.054970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.055012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.055238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.055267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.055455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.055482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 
00:34:18.047 [2024-07-14 05:48:25.055666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.055694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.055879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.055907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.056068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.056097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.056280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.056307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.056466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.056494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.056674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.056701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.056884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.056918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.057131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.057159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.057340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.057368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.057578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.057605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 
00:34:18.047 [2024-07-14 05:48:25.057621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:18.047 [2024-07-14 05:48:25.057762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.057790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.057957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.057985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.058169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.058197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.058378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.058405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.058570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.058598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.058782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.058810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.058996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.059023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.059207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.059234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.059421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.059448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 
00:34:18.047 [2024-07-14 05:48:25.059634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.059666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.059823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.059852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.060051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.060079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.060288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.060315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.060466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.060495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.060680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.060710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.047 qpair failed and we were unable to recover it. 00:34:18.047 [2024-07-14 05:48:25.060872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.047 [2024-07-14 05:48:25.060900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.061117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.061144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.061333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.061361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.061541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.061567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 
00:34:18.048 [2024-07-14 05:48:25.061722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.061749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.061907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.061934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.062123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.062151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.062343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.062370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.062531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.062557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.062741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.062768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.062984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.063012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.063169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.063196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.063384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.063411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.063597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.063625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 
00:34:18.048 [2024-07-14 05:48:25.063774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.063802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.064002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.064029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.064206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.064232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.064413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.064440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.064620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.064647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.064840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.064873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.065033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.065059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.065214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.065242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.065429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.065456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.065666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.065693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 
00:34:18.048 [2024-07-14 05:48:25.065880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.065910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.066081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.066109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.066278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.066306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.066465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.066492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.066709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.066736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.066925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.066954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.067115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.067142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.067299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.067326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.067484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.067511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.067723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.067750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 
00:34:18.048 [2024-07-14 05:48:25.067911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.067947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.068226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.068253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.068468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.068495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.068680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.068707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.068905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.068932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.069110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.069138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.048 [2024-07-14 05:48:25.069422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.048 [2024-07-14 05:48:25.069448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.048 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.069642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.069668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.069879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.069906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.070067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.070094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 
00:34:18.049 [2024-07-14 05:48:25.070252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.070279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.070459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.070486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.070693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.070720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.070878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.070905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.071078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.071106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.071316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.071343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.071531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.071557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.071747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.071773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.071954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.071981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.072163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.072208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 
00:34:18.049 [2024-07-14 05:48:25.072429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.072458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.072639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.072667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.072851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.072891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.073089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.073117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.073344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.073372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.073556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.073586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.073800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.073828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.074046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.074074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.074230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.074258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.074444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.074472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 
00:34:18.049 [2024-07-14 05:48:25.074691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.074718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.074905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.074934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.075115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.075144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.075356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.075383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.075570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.075599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.075784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.075812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.075969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.075997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5cf8000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.076163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.076194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.076379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.076407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.076596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.076624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 
00:34:18.049 [2024-07-14 05:48:25.076838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.076888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.077085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.077112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.077297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.077324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.077536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.077563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.077725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.077754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.077943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.077972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.078155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.078182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.078382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.078409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.049 [2024-07-14 05:48:25.078563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.049 [2024-07-14 05:48:25.078592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.049 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.078774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.078801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 
00:34:18.050 [2024-07-14 05:48:25.079009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.079037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.079221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.079248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.079446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.079473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.079663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.079691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.079881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.079909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.080098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.080126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.080314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.080341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.080509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.080536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.080750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.080778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.080973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.081001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 
00:34:18.050 [2024-07-14 05:48:25.081192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.081219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.081403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.081430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.081614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.081641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.081852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.081897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.082074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.082102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.082286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.082314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.082521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.082548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.082731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.082763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.082934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.082962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.083129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.083156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 
00:34:18.050 [2024-07-14 05:48:25.083343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.083370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.083578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.083605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.083796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.083823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.084016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.084045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.084234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.084261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.084418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.084445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.084654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.084681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.084896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.084925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.085077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.085104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.085260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.085288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 
00:34:18.050 [2024-07-14 05:48:25.085473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.085500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.085717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.085745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.085910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.085937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.086105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.086133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.086283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.086311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.086491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.086518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.086701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.086729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.086945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.086974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.087157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.087184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.087369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.087396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 
00:34:18.050 [2024-07-14 05:48:25.087563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.087591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.087771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.087798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.087972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.088001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.088185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.050 [2024-07-14 05:48:25.088212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.050 qpair failed and we were unable to recover it. 00:34:18.050 [2024-07-14 05:48:25.088404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.088432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.088600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.088627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.088805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.088832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.089064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.089092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.089274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.089302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.089513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.089542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 
00:34:18.051 [2024-07-14 05:48:25.089706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.089733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.089925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.089954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.090113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.090140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.090326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.090354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.090508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.090536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.090717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.090744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.090896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.090924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.091106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.091137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.091289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.091316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.091503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.091532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 
00:34:18.051 [2024-07-14 05:48:25.091722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.091750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.091971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.091999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.092159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.092188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.092378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.092405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.092612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.092639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.092823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.092862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.093021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.093048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.093203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.093231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.093446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.093474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.093660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.093688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 
00:34:18.051 [2024-07-14 05:48:25.093891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.093919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.094116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.094144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.094354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.094382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.094540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.094568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.094730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.094758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.094948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.094977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.095137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.095165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.095344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.095371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.095585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.095613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.095801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.095828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 
00:34:18.051 [2024-07-14 05:48:25.096048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.096076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.096253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.096281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.096494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.096521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.096711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.096738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.096921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.096949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.097160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.097193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.097373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.097401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.097584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.097611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.097767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.097795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 00:34:18.051 [2024-07-14 05:48:25.097989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.051 [2024-07-14 05:48:25.098017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.051 qpair failed and we were unable to recover it. 
00:34:18.051 [2024-07-14 05:48:25.098197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.098225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.098407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.098434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.098623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.098651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.098863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.098897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.099060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.099087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.099299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.099326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.099537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.099566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.099752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.099784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.099974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.100002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.100163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.100190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 
00:34:18.052 [2024-07-14 05:48:25.100374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.100401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.100588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.100616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.100769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.100797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.100985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.101013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.101202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.101229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.101415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.101442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.101628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.101656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.101958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.101996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.102229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.102259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.102478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.102506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 
00:34:18.052 [2024-07-14 05:48:25.102690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.102717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.102921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.102950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.103166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.103193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.103379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.103406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.103593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.103627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.103843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.103888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.104091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.104120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.104301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.104328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.104517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.104544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.104733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.104761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 
00:34:18.052 [2024-07-14 05:48:25.104960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.104987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.105164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.105191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.105357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.105389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.105553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.105591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.052 [2024-07-14 05:48:25.105808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.052 [2024-07-14 05:48:25.105838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.052 qpair failed and we were unable to recover it. 00:34:18.328 [2024-07-14 05:48:25.106057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.328 [2024-07-14 05:48:25.106086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.328 qpair failed and we were unable to recover it. 00:34:18.328 [2024-07-14 05:48:25.106258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.328 [2024-07-14 05:48:25.106286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.328 qpair failed and we were unable to recover it. 00:34:18.328 [2024-07-14 05:48:25.106452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.328 [2024-07-14 05:48:25.106479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.328 qpair failed and we were unable to recover it. 00:34:18.328 [2024-07-14 05:48:25.106645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.328 [2024-07-14 05:48:25.106673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.328 qpair failed and we were unable to recover it. 00:34:18.328 [2024-07-14 05:48:25.106881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.328 [2024-07-14 05:48:25.106909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.328 qpair failed and we were unable to recover it. 
00:34:18.328 [2024-07-14 05:48:25.107071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.328 [2024-07-14 05:48:25.107099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.328 qpair failed and we were unable to recover it. 00:34:18.328 [2024-07-14 05:48:25.107282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.328 [2024-07-14 05:48:25.107308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.328 qpair failed and we were unable to recover it. 00:34:18.328 [2024-07-14 05:48:25.107467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.328 [2024-07-14 05:48:25.107494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.328 qpair failed and we were unable to recover it. 00:34:18.328 [2024-07-14 05:48:25.107655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.328 [2024-07-14 05:48:25.107683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.328 qpair failed and we were unable to recover it. 00:34:18.328 [2024-07-14 05:48:25.107862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.328 [2024-07-14 05:48:25.107896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.328 qpair failed and we were unable to recover it. 00:34:18.328 [2024-07-14 05:48:25.108100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.328 [2024-07-14 05:48:25.108127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.328 qpair failed and we were unable to recover it. 00:34:18.328 [2024-07-14 05:48:25.108321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.328 [2024-07-14 05:48:25.108347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.328 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.108562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.108593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.108779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.108806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.108996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.109024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 
00:34:18.329 [2024-07-14 05:48:25.109172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.109200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.109393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.109437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.109631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.109659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.109815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.109845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.110041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.110068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.110259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.110285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.110497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.110524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.110748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.110774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.110940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.110967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.111125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.111152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 
00:34:18.329 [2024-07-14 05:48:25.111337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.111364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.111554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.111580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.111734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.111759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.111960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.111987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.112160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.112188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.112376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.112404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.112588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.112616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.112799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.112826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.113053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.113081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.113271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.113298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 
00:34:18.329 [2024-07-14 05:48:25.113483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.113509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.113661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.113686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.113908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.113935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.114219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.114246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.114460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.114486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.114670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.114696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.114883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.114911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.115095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.115120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.115345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.115371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.115583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.115609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 
00:34:18.329 [2024-07-14 05:48:25.115804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.115830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.116021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.116048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.116204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.116230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.116451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.116478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.116641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.116669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.116881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.329 [2024-07-14 05:48:25.116908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.329 qpair failed and we were unable to recover it. 00:34:18.329 [2024-07-14 05:48:25.117101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.117127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.117314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.117346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.117533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.117561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.117726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.117754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 
00:34:18.330 [2024-07-14 05:48:25.117917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.117945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.118160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.118187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.118373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.118399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.118585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.118612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.118801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.118828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.119023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.119051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.119261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.119288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.119473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.119500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.119691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.119717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.119941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.119968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 
00:34:18.330 [2024-07-14 05:48:25.120127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.120154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.120346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.120375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.120562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.120590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.120773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.120802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.120986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.121015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.121172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.121199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.121388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.121415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.121606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.121633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.121821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.121847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.122009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.122036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 
00:34:18.330 [2024-07-14 05:48:25.122224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.122251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.122462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.122489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.122670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.122697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.122879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.122908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.123070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.123097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.123280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.123308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.123496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.123523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.123731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.123758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.123981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.124009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.124199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.124225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 
00:34:18.330 [2024-07-14 05:48:25.124433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.124460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.124648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.124677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.124878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.124906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.125124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.125152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.125363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.125391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.125598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.125625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.330 [2024-07-14 05:48:25.125812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.330 [2024-07-14 05:48:25.125839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.330 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.126051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.126082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.126299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.126327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.126537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.126564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 
00:34:18.331 [2024-07-14 05:48:25.126774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.126802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.126986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.127015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.127168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.127195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.127381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.127408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.127567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.127594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.127755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.127783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.127954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.127982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.128170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.128197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.128409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.128437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.128623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.128650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 
00:34:18.331 [2024-07-14 05:48:25.128863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.128896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.129058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.129087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.129273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.129302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.129511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.129539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.129750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.129777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.129965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.129993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.130148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.130176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.130387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.130414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.130573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.130600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.130791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.130818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 
00:34:18.331 [2024-07-14 05:48:25.131007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.131034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.131221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.131249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.131462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.131489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.131680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.131707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.131923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.131951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.132104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.132132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.132288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.132315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.132528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.132555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.132762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.132789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.132967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.132995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 
00:34:18.331 [2024-07-14 05:48:25.133184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.133212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.133372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.133399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.133611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.133638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.133822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.133849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.134069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.134096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.134275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.134301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.331 [2024-07-14 05:48:25.134510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.331 [2024-07-14 05:48:25.134537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.331 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.134695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.134732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.134919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.134946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.135133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.135159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 
00:34:18.332 [2024-07-14 05:48:25.135334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.135361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.135580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.135607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.135819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.135846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.136019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.136047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.136266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.136293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.136489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.136516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.136719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.136746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.136952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.136981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.137167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.137194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.137347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.137374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 
00:34:18.332 [2024-07-14 05:48:25.137533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.137561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.137721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.137748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.137933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.137960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.138150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.138177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.138383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.138410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.138593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.138620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.138826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.138853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.139057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.139084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.139265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.139293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.139456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.139484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 
00:34:18.332 [2024-07-14 05:48:25.139705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.139733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.139947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.139975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.140162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.140191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.140401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.140429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.140614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.140641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.140825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.140852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.141014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.141042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.141249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.141277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.141455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.141482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.141637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.141664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 
00:34:18.332 [2024-07-14 05:48:25.141851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.141885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.142073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.142100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.142287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.142314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.332 [2024-07-14 05:48:25.142493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.332 [2024-07-14 05:48:25.142521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.332 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.142673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.142700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.142855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.142890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.143075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.143103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.143287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.143318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.143507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.143534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.143747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.143774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 
00:34:18.333 [2024-07-14 05:48:25.143987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.144015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.144200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.144227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.144413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.144440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.144651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.144678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.144886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.144913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.145105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.145133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.145344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.145370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.145550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.145576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.145779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.145805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.145988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.146015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 
00:34:18.333 [2024-07-14 05:48:25.146226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.146253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.146433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.146460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.146643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.146670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.146826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.146853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.147026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.147053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.147300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.147326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.147607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.147634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.147814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.147841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.148080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.148126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.148310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.148338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 
00:34:18.333 [2024-07-14 05:48:25.148497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.148524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.148674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.148700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.148880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.148908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.149068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.149095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d08000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.149226] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:18.333 [2024-07-14 05:48:25.149261] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:18.333 [2024-07-14 05:48:25.149277] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:18.333 [2024-07-14 05:48:25.149289] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:18.333 [2024-07-14 05:48:25.149284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.149300] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:18.333 [2024-07-14 05:48:25.149311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.149376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:18.333 [2024-07-14 05:48:25.149445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:18.333 [2024-07-14 05:48:25.149528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.149555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.333 [2024-07-14 05:48:25.149475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:18.333 [2024-07-14 05:48:25.149477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:18.333 [2024-07-14 05:48:25.149736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.149764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 
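The app_setup_trace notices interleaved above describe how the trace data for this run can be inspected. A minimal sketch of the two options they mention, assuming the commands are run on the test host while (or after) the nvmf target with shm id 0 is up; the command line and the /dev/shm path are taken from the notices themselves:
    # capture a snapshot of trace events from the running nvmf application (shm id 0)
    spdk_trace -s nvmf -i 0
    # or keep a copy of the shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0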
00:34:18.333 [2024-07-14 05:48:25.149927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.333 [2024-07-14 05:48:25.149954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.333 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.150223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.150250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.150450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.150477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.150661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.150689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.150881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.150908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.151095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.151123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.151307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.151334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.151538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.151569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.151738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.151765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.151981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.152008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 
00:34:18.334 [2024-07-14 05:48:25.152210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.152254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.152445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.152473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.152632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.152659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.152858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.152895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.153052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.153078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.153245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.153270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.153435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.153461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.153614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.153641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.153824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.153850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.154018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.154044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 
00:34:18.334 [2024-07-14 05:48:25.154195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.154222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.154379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.154405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.154565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.154590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.154749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.154775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.154936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.154967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.155132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.155159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.155309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.155334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.155510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.155536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.155693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.155719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.155913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.155940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 
00:34:18.334 [2024-07-14 05:48:25.156098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.156123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.156322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.156348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.156533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.156559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.156723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.156750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.156937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.156969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.157126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.157151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.157360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.157386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.157544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.157569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.157742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.157768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.334 [2024-07-14 05:48:25.157928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.157954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 
00:34:18.334 [2024-07-14 05:48:25.158146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.334 [2024-07-14 05:48:25.158174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.334 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.158361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.158387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.158533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.158560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.158712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.158737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.158923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.158949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.159098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.159124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.159317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.159350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.159534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.159561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.159725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.159752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.159935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.159962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 
00:34:18.335 [2024-07-14 05:48:25.160132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.160160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.160315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.160342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.160513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.160538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.160707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.160732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.160890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.160920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.161071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.161098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.161264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.161291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.161471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.161497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.161665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.161691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.161849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.161882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 
00:34:18.335 [2024-07-14 05:48:25.162048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.162074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.162256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.162286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.162495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.162522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.162703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.162729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.162913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.162941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.163098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.163124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.163276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.163302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.163477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.163503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.163694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.163721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.163878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.163906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 
00:34:18.335 [2024-07-14 05:48:25.164060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.164085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.164273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.164299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.164466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.164492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.164702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.164728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.164918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.164945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.165096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.165122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.165280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.165306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.165466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.165493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.165677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.165703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.165855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.165887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 
00:34:18.335 [2024-07-14 05:48:25.166036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.166063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.335 qpair failed and we were unable to recover it. 00:34:18.335 [2024-07-14 05:48:25.166227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.335 [2024-07-14 05:48:25.166254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.166441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.166467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.166637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.166662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.166852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.166887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.167060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.167086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.167245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.167271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.167456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.167481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.167651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.167677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.167828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.167870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 
00:34:18.336 [2024-07-14 05:48:25.168055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.168080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.168285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.168311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.168474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.168499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.168673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.168698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.168852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.168902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.169064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.169090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.169271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.169296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.169453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.169478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.169654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.169680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.169863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.169894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 
00:34:18.336 [2024-07-14 05:48:25.170050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.170075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.170225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.170250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.170415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.170441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.170617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.170643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.170821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.170847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.171030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.171056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.171218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.171244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.171422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.171448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.171607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.171632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.171815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.171843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 
00:34:18.336 [2024-07-14 05:48:25.172020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.172047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.172224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.172250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.172417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.172443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.172627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.172652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.172801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.172827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.172991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.173017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.173199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.173225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.173404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.173429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.173628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.173654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.173851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.173882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 
00:34:18.336 [2024-07-14 05:48:25.174070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.174096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.174252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.336 [2024-07-14 05:48:25.174279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.336 qpair failed and we were unable to recover it. 00:34:18.336 [2024-07-14 05:48:25.174460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.174486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.174641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.174666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.174854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.174885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.175045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.175071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.175285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.175311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.175484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.175509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.175658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.175684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.175989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.176020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 
00:34:18.337 [2024-07-14 05:48:25.176239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.176265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.176428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.176453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.176643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.176669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.176846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.176877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.177026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.177051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.177224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.177251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.177556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.177582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.177743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.177769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.177930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.177957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.178114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.178140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 
00:34:18.337 [2024-07-14 05:48:25.178295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.178321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.178496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.178522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.178667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.178693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.178876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.178902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.179063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.179089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.179264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.179290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.179477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.179503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.179667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.179693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.179839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.179904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.180066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.180093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 
00:34:18.337 [2024-07-14 05:48:25.180276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.180302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.180447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.180473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.180634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.180660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.180811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.180837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.181037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.181065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.181222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.181248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.181430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.181460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.337 qpair failed and we were unable to recover it. 00:34:18.337 [2024-07-14 05:48:25.181611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.337 [2024-07-14 05:48:25.181637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.181840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.181872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.182065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.182092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 
00:34:18.338 [2024-07-14 05:48:25.182251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.182276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.182458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.182484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.182629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.182655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.182837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.182862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.183041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.183067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.183219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.183245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.183412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.183438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.183618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.183645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.183788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.183814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.184007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.184034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 
00:34:18.338 [2024-07-14 05:48:25.184216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.184242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.184420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.184446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.184595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.184621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.184903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.184930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.185108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.185133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.185288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.185314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.185494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.185520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.185796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.185822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.186046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.186074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.186250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.186277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 
00:34:18.338 [2024-07-14 05:48:25.186480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.186505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.186681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.186707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.186886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.186912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.187098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.187133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.187287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.187313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.187471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.187497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.187679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.187705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.187972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.187999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.188206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.188231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.188412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.188437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 
00:34:18.338 [2024-07-14 05:48:25.188647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.188673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.188875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.188901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.189090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.189116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.189265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.189291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.189436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.189462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.189626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.189652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.189832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.189858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.190042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.190069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.338 qpair failed and we were unable to recover it. 00:34:18.338 [2024-07-14 05:48:25.190251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.338 [2024-07-14 05:48:25.190277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.190434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.190460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 
00:34:18.339 [2024-07-14 05:48:25.190637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.190666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.190846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.190877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.191032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.191058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.191234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.191272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.191453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.191479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.191663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.191688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.191838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.191878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.192059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.192084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.192248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.192273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.192458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.192484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 
00:34:18.339 [2024-07-14 05:48:25.192649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.192678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.192852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.192882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.193052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.193077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.193258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.193284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.193474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.193499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.193680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.193707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.193849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.193880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.194078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.194104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.194287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.194312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.194526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.194552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 
00:34:18.339 [2024-07-14 05:48:25.194727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.194753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.194940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.194966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.195123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.195148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.195301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.195327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.195479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.195505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.195681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.195706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.195856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.195894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.196046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.196072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.196244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.196270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.196430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.196455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 
00:34:18.339 [2024-07-14 05:48:25.196606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.196631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.196811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.196838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.197038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.197065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.197216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.197242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.197412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.197438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.197593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.197619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.197822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.197848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.198037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.198063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.198381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.198418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 00:34:18.339 [2024-07-14 05:48:25.198629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.339 [2024-07-14 05:48:25.198655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.339 qpair failed and we were unable to recover it. 
00:34:18.340 [2024-07-14 05:48:25.198806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.198832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.199012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.199038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.199189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.199215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.199409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.199435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.199582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.199608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.199754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.199780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.199953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.199981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.200170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.200196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.200384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.200410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.200598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.200626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 
00:34:18.340 [2024-07-14 05:48:25.200815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.200840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.201028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.201059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.201240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.201266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.201426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.201452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.201606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.201632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.201784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.201810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.201983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.202010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.202168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.202195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.202380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.202405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.202558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.202584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 
00:34:18.340 [2024-07-14 05:48:25.202746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.202777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.202942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.202982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.203136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.203161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.203368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.203395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.203595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.203624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.203783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.203809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.203967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.203994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.204149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.204174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.204375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.204419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.204622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.204651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 
00:34:18.340 [2024-07-14 05:48:25.204850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.204887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.205046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.205073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.205287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.205325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.205533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.205561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.205725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.205753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.205912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.205941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.206098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.206124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.206320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.206350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.206537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.206572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 00:34:18.340 [2024-07-14 05:48:25.206786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.206817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.340 qpair failed and we were unable to recover it. 
00:34:18.340 [2024-07-14 05:48:25.207007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.340 [2024-07-14 05:48:25.207035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.207220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.207248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.207403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.207431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.207592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.207620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.207840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.207888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.208059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.208086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.208297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.208325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.208529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.208557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.208731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.208759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.208926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.208957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 
00:34:18.341 [2024-07-14 05:48:25.209163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.209190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.209348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.209376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.209614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.209652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.209850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.209883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.210040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.210076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5d00000b90 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.210268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.210309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.210482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.210510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.210688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.210716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.210874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.210901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.211079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.211106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 
00:34:18.341 [2024-07-14 05:48:25.211279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.211305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.211483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.211509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.211705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.211741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.211915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.211941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.212124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.212149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.212308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.212336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.212519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.212556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.212735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.212761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.212975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.213002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.213171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.213197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 
00:34:18.341 [2024-07-14 05:48:25.213370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.213396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.213563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.213589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.213763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.213788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.213945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.213971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.214165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.214191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.341 [2024-07-14 05:48:25.214354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.341 [2024-07-14 05:48:25.214380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.341 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.214526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.214552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.214712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.214738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.214908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.214935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.215222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.215249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 
00:34:18.342 [2024-07-14 05:48:25.215503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.215529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.215698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.215724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.215884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.215910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.216090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.216116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.216289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.216316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.216498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.216524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.216684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.216710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.216916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.216947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.217096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.217122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.217324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.217350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 
00:34:18.342 [2024-07-14 05:48:25.217509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.217540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.217697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.217723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.217915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.217947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.218092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.218124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.218281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.218307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.218467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.218493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.218650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.218677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.218838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.218877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.219040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.219066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 00:34:18.342 [2024-07-14 05:48:25.219236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.219263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it. 
00:34:18.342 [2024-07-14 05:48:25.219432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.342 [2024-07-14 05:48:25.219460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.342 qpair failed and we were unable to recover it.
00:34:18.347 [the same three-line error group repeats for every subsequent connection attempt in this interval, with timestamps advancing from 2024-07-14 05:48:25.219432 to 05:48:25.261483 and the wallclock prefix advancing from 00:34:18.342 to 00:34:18.347; every attempt targets tqpair=0x1405840 at addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it."]
00:34:18.347 [2024-07-14 05:48:25.261645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.347 [2024-07-14 05:48:25.261671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.347 qpair failed and we were unable to recover it. 00:34:18.347 [2024-07-14 05:48:25.261853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.347 [2024-07-14 05:48:25.261884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.347 qpair failed and we were unable to recover it. 00:34:18.347 [2024-07-14 05:48:25.262064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.347 [2024-07-14 05:48:25.262090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.347 qpair failed and we were unable to recover it. 00:34:18.347 [2024-07-14 05:48:25.262275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.347 [2024-07-14 05:48:25.262301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.347 qpair failed and we were unable to recover it. 00:34:18.347 [2024-07-14 05:48:25.262448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.347 [2024-07-14 05:48:25.262473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.262647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.262673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.262875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.262902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.263079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.263105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.263278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.263304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.263491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.263517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 
00:34:18.348 [2024-07-14 05:48:25.263695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.263721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.263879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.263906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.264056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.264082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.264245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.264271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.264418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.264444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.264646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.264672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.264824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.264851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.265039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.265066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.265218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.265244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.265428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.265454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 
00:34:18.348 [2024-07-14 05:48:25.265609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.265635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.265826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.265862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.266033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.266059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.266220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.266245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.266397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.266423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.266581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.266608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.266768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.266798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.266988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.267014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.267162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.267188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.267345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.267371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 
00:34:18.348 [2024-07-14 05:48:25.267516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.267542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.267691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.267716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.267922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.267948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.268108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.268134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.268299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.268325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.268480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.268506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.268654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.268680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.268860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.268900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.269045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.269071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.269233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.269259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 
00:34:18.348 [2024-07-14 05:48:25.269434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.269460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.269635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.269661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.269816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.269842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.270034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.270059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.270204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.270229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.270387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.348 [2024-07-14 05:48:25.270413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.348 qpair failed and we were unable to recover it. 00:34:18.348 [2024-07-14 05:48:25.270610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.270636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.270786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.270812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.270997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.271023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.271197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.271222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 
00:34:18.349 [2024-07-14 05:48:25.271372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.271398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.271578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.271603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.271767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.271792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.271972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.271998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.272149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.272175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.272380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.272406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.272562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.272587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.272733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.272758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.272919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.272946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.273088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.273113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 
00:34:18.349 [2024-07-14 05:48:25.273260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.273286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.273440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.273466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.273665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.273691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.273845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.273875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.274024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.274049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.274229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.274254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.274412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.274438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.274630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.274656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.274832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.274871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.275044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.275070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 
00:34:18.349 [2024-07-14 05:48:25.275255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.275280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.275457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.275482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.275655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.275681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.275892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.275918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.276072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.276098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.276285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.276311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.276485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.276510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.276659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.276685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.276888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.276914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.277084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.277109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 
00:34:18.349 [2024-07-14 05:48:25.277262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.277288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.277451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.277477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.277655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.277681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.277823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.277849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.278041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.278068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.278214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.278239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.278417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.349 [2024-07-14 05:48:25.278442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.349 qpair failed and we were unable to recover it. 00:34:18.349 [2024-07-14 05:48:25.278612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.350 [2024-07-14 05:48:25.278638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.350 qpair failed and we were unable to recover it. 00:34:18.350 [2024-07-14 05:48:25.278813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.350 [2024-07-14 05:48:25.278842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.350 qpair failed and we were unable to recover it. 00:34:18.350 [2024-07-14 05:48:25.279001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.350 [2024-07-14 05:48:25.279029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.350 qpair failed and we were unable to recover it. 
00:34:18.350 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:18.350 [2024-07-14 05:48:25.279224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.350 [2024-07-14 05:48:25.279252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.350 qpair failed and we were unable to recover it. 00:34:18.350 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:18.350 [2024-07-14 05:48:25.279429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.350 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:18.350 [2024-07-14 05:48:25.279455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.350 qpair failed and we were unable to recover it. 00:34:18.350 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:18.350 [2024-07-14 05:48:25.279600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.350 [2024-07-14 05:48:25.279626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.350 qpair failed and we were unable to recover it. 00:34:18.350 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:18.350 [2024-07-14 05:48:25.279786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.350 [2024-07-14 05:48:25.279813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.350 qpair failed and we were unable to recover it. 00:34:18.350 [2024-07-14 05:48:25.279986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.350 [2024-07-14 05:48:25.280013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.280164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.280190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.280366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.280392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.280551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.280586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 
00:34:18.351 [2024-07-14 05:48:25.280767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.280792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.280951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.280977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.281156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.281183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.281333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.281359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.281505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.281531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.281712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.281738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.281910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.281947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.282112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.282138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.282305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.282331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.282486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.282519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 
00:34:18.351 [2024-07-14 05:48:25.282695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.282721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.282924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.282951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.283099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.283125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.283304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.283331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.283491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.283517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.283700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.283727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.283898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.283925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.284093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.284119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.284288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.284315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.284476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.284502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 
00:34:18.351 [2024-07-14 05:48:25.284669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.284695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.284896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.284927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.285077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.285103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.285287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.285313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.285522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.285549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.285695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.285721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.351 qpair failed and we were unable to recover it. 00:34:18.351 [2024-07-14 05:48:25.285877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.351 [2024-07-14 05:48:25.285904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.286065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.286091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.286242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.286268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.286421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.286447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 
00:34:18.352 [2024-07-14 05:48:25.286659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.286686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.286836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.286881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.287064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.287090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.287267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.287293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.287500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.287526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.287712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.287738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.287912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.287938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.288087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.288112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.288301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.288327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.288487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.288513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 
00:34:18.352 [2024-07-14 05:48:25.288672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.288699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.288881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.288907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.289088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.289115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.289297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.289324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.289472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.289498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.289654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.289679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.289858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.289889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.290041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.290067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.290252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.290283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.290467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.290500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 
00:34:18.352 [2024-07-14 05:48:25.290651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.290682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.290841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.290871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.291085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.291111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.291295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.291321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.291483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.291508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.291686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.291712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.291891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.291918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.292068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.292094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.292284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.292309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.292474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.292500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 
00:34:18.352 [2024-07-14 05:48:25.292644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.292670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.292815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.292841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.293008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.293035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.293211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.293242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.293430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.293456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.293655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.293681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.293861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.293892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.352 [2024-07-14 05:48:25.294073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.352 [2024-07-14 05:48:25.294099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.352 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.294278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.294304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.294464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.294490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 
00:34:18.353 [2024-07-14 05:48:25.294649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.294675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.294842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.294884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.295053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.295079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.295223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.295261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.295444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.295471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.295645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.295672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.295871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.295897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.296095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.296121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.296288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.296315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.296495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.296521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 
00:34:18.353 [2024-07-14 05:48:25.296710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.353 [2024-07-14 05:48:25.296735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:18.353 qpair failed and we were unable to recover it.
00:34:18.353 [2024-07-14 05:48:25.296917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.353 [2024-07-14 05:48:25.296945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:18.353 qpair failed and we were unable to recover it.
00:34:18.353 [2024-07-14 05:48:25.297101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.353 [2024-07-14 05:48:25.297127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:18.353 qpair failed and we were unable to recover it.
00:34:18.353 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:18.353 [2024-07-14 05:48:25.297313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.353 [2024-07-14 05:48:25.297352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:18.353 qpair failed and we were unable to recover it.
00:34:18.353 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:18.353 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:18.353 [2024-07-14 05:48:25.297560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.353 [2024-07-14 05:48:25.297587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:18.353 qpair failed and we were unable to recover it.
00:34:18.353 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:18.353 [2024-07-14 05:48:25.297814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.353 [2024-07-14 05:48:25.297840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:18.353 qpair failed and we were unable to recover it.
00:34:18.353 [2024-07-14 05:48:25.298007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.353 [2024-07-14 05:48:25.298034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:18.353 qpair failed and we were unable to recover it.
00:34:18.353 [2024-07-14 05:48:25.298220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.353 [2024-07-14 05:48:25.298251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:18.353 qpair failed and we were unable to recover it.
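Buried in the repeated connect failures above, the harness installs its cleanup trap and then runs rpc_cmd bdev_malloc_create 64 512 -b Malloc0, asking the target application for a 64 MiB RAM-backed bdev with a 512-byte block size named Malloc0. rpc_cmd is the test wrapper around SPDK's scripts/rpc.py, so a rough hand-driven equivalent (assuming the default RPC socket of the running target) is:

  # sketch of the equivalent direct RPC calls, assuming /var/tmp/spdk.sock
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512-byte blocks
  ./scripts/rpc.py bdev_get_bdevs -b Malloc0              # confirm the bdev exists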
00:34:18.353 [2024-07-14 05:48:25.298410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.298435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.298582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.298608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.298763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.298791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.298976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.299002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.299149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.299186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.299347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.299372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.299553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.299578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.299755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.299781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.299932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.299958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.300119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.300145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 
00:34:18.353 [2024-07-14 05:48:25.300300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.300326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.300505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.300531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.300696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.300722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.300882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.300909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.301066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.301092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.301241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.301267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.301477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.301503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.301653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.301679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.301830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.301856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.353 qpair failed and we were unable to recover it. 00:34:18.353 [2024-07-14 05:48:25.302018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.353 [2024-07-14 05:48:25.302045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 
00:34:18.354 [2024-07-14 05:48:25.302194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.302220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.302368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.302393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.302569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.302594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.302776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.302802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.302972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.302999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.303149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.303174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.303392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.303421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.303635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.303661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.303846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.303883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.304045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.304070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 
00:34:18.354 [2024-07-14 05:48:25.304232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.304257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.304402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.304428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.304627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.304653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.304823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.304848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.305021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.305048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.305209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.305235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.305410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.305436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.305604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.305630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.305839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.305887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.306049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.306076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 
00:34:18.354 [2024-07-14 05:48:25.306231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.306257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.306412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.306439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.306626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.306652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.306807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.306832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.307008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.307034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.307187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.307213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.307389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.307415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.307569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.307594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.307758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.307784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.307959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.307985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 
00:34:18.354 [2024-07-14 05:48:25.308193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.308218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.308418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.308443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.308631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.308656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.308811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.308837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.309053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.309080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.309264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.309290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.309499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.309525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.309713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.309739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.309928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.309954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 00:34:18.354 [2024-07-14 05:48:25.310105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.310132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.354 qpair failed and we were unable to recover it. 
00:34:18.354 [2024-07-14 05:48:25.310342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.354 [2024-07-14 05:48:25.310368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.310548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.310573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.310726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.310751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.310911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.310938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.311095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.311121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.311272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.311298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.311475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.311501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.311692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.311718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.311899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.311926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.312081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.312107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 
00:34:18.355 [2024-07-14 05:48:25.312315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.312342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.312504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.312538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.312693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.312719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.312899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.312925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.313131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.313157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.313321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.313347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.313505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.313531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.313677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.313703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.313896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.313922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.314076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.314102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 
00:34:18.355 [2024-07-14 05:48:25.314257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.314282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.314502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.314528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.314677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.314703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.314889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.314915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.315096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.315122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.315281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.315307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.315461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.315487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.315762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.315799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.316000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.316027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.316184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.316210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 
00:34:18.355 [2024-07-14 05:48:25.316403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.316429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.316597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.316622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.316786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.316811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.317002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.317029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.317175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.317205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.317364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.317390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.355 [2024-07-14 05:48:25.317605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.355 [2024-07-14 05:48:25.317631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.355 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.317817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.317843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.318049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.318075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.318248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.318274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 
00:34:18.356 [2024-07-14 05:48:25.318420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.318446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.318624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.318650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.318826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.318851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.319007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.319034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.319219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.319247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.319400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.319426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.319588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.319614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.319794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.319820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.320010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.320036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.320190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.320216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 
00:34:18.356 [2024-07-14 05:48:25.320392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.320418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.320593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.320619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.320788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.320815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.320975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.321001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.321184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.321210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.321358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.321384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.321533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.321559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.321721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.321747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.321897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.321923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.322101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.322127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 
00:34:18.356 [2024-07-14 05:48:25.322283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.356 [2024-07-14 05:48:25.322309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:18.356 qpair failed and we were unable to recover it.
00:34:18.356 Malloc0
00:34:18.356 [2024-07-14 05:48:25.322484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.356 [2024-07-14 05:48:25.322517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:18.356 qpair failed and we were unable to recover it.
00:34:18.356 [2024-07-14 05:48:25.322696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.356 [2024-07-14 05:48:25.322723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:18.356 qpair failed and we were unable to recover it.
00:34:18.356 [2024-07-14 05:48:25.322892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.356 [2024-07-14 05:48:25.322918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:18.356 qpair failed and we were unable to recover it.
00:34:18.356 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:18.356 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:18.356 [2024-07-14 05:48:25.323122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.356 [2024-07-14 05:48:25.323148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:18.356 qpair failed and we were unable to recover it.
00:34:18.356 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:18.356 [2024-07-14 05:48:25.323315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.356 [2024-07-14 05:48:25.323341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:18.356 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:18.356 qpair failed and we were unable to recover it.
00:34:18.356 [2024-07-14 05:48:25.323548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.356 [2024-07-14 05:48:25.323575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:18.356 qpair failed and we were unable to recover it.
00:34:18.356 [2024-07-14 05:48:25.323743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.356 [2024-07-14 05:48:25.323769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420
00:34:18.356 qpair failed and we were unable to recover it.
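The bare Malloc0 line in the output above is the return value of that bdev_malloc_create call (the name of the new bdev), after which the harness issues rpc_cmd nvmf_create_transport -t tcp -o to create the NVMe-oF TCP transport inside the target. A hand-driven sketch of the same step (the -o flag is copied verbatim from the trace; check rpc.py nvmf_create_transport -h on the SPDK version in use for its exact meaning):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o   # create the TCP transport
  ./scripts/rpc.py nvmf_get_transports               # should now report a TCP transport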
00:34:18.356 [2024-07-14 05:48:25.323943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.323970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.324129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.324166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.324326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.324351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.324507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.324533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.324680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.324706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.324893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.324923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.325107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.325132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.325318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.356 [2024-07-14 05:48:25.325344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.356 qpair failed and we were unable to recover it. 00:34:18.356 [2024-07-14 05:48:25.325529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.325556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.325770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.325795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 
00:34:18.357 [2024-07-14 05:48:25.325969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.325996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.326147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.326181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 [2024-07-14 05:48:25.326179] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.326364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.326391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.326548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.326574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.326759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.326785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.326953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.326980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.327158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.327184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.327347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.327372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.327542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.327568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.327725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.327751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 
00:34:18.357 [2024-07-14 05:48:25.327918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.327945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.328119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.328145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.328365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.328390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.328566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.328591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.328743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.328769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.328962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.328989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.329146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.329181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.329328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.329354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.329508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.329534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.329706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.329731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 
00:34:18.357 [2024-07-14 05:48:25.329908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.329934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.330113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.330138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.330364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.330390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.330558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.330584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.330761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.330787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.330949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.330976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.331123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.331148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.331336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.331362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.331513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.331539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.331685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.331710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 
00:34:18.357 [2024-07-14 05:48:25.331918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.331944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.332104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.332129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.332304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.332331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.332485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.332511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.332665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.332691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.332874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.332901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.333085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.333111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.333266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.333291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.333438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.333464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.357 qpair failed and we were unable to recover it. 00:34:18.357 [2024-07-14 05:48:25.333621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.357 [2024-07-14 05:48:25.333646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 
00:34:18.358 [2024-07-14 05:48:25.333792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.333817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.334017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.334043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.334194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.334220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.334371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.334396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.358 [2024-07-14 05:48:25.334560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.334586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.358 [2024-07-14 05:48:25.334751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.334777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:18.358 [2024-07-14 05:48:25.334948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.334976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.335127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.335168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 
00:34:18.358 [2024-07-14 05:48:25.335357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.335383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.335542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.335568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.335736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.335762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.335917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.335943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.336095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.336120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.336288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.336315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.336468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.336494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.336638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.336663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.336811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.336837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.337000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.337026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 
00:34:18.358 [2024-07-14 05:48:25.337171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.337198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.337377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.337403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.337566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.337592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.337750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.337776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.337965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.337992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.338142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.338168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.338327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.338353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.338512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.338538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.338689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.338716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.338898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.338924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 
00:34:18.358 [2024-07-14 05:48:25.339095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.339121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.339295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.339321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.339474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.339500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.339651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.339677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.339823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.339849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.340008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.340034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.340193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.340219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.340400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.340425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.340580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.340606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.358 [2024-07-14 05:48:25.340782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.340807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 
00:34:18.358 [2024-07-14 05:48:25.340959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.358 [2024-07-14 05:48:25.340986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.358 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.341169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.341195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.341341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.341367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.341551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.341579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.341730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.341756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.341909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.341936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.342124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.342150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.342316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.342342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.359 [2024-07-14 05:48:25.342499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.342525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 
00:34:18.359 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:18.359 [2024-07-14 05:48:25.342706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.342732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.359 [2024-07-14 05:48:25.342907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:18.359 [2024-07-14 05:48:25.342933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.343101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.343127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.343309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.343335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.343490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.343515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.343672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.343698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.343848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.343878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.344058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.344084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.344291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.344317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 
00:34:18.359 [2024-07-14 05:48:25.344484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.344510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.344674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.344701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.344886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.344913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.345070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.345096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.345308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.345333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.345480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.345506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.345675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.345701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.345885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.345911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.346081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.346107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.346264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.346289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 
00:34:18.359 [2024-07-14 05:48:25.346458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.346484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.346656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.346682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.346839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.346870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.347027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.347053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.347205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.347230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.347387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.347413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.347562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.347587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.359 qpair failed and we were unable to recover it. 00:34:18.359 [2024-07-14 05:48:25.347750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.359 [2024-07-14 05:48:25.347780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.347954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.347981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.348139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.348164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 
00:34:18.360 [2024-07-14 05:48:25.348328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.348354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.348560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.348586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.348773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.348798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.348983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.349009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.349165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.349191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.349366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.349392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.349575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.349601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.349771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.349796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.349988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.350015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.350169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.350195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 
00:34:18.360 [2024-07-14 05:48:25.350350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.350376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.360 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:18.360 [2024-07-14 05:48:25.350565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.350591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.360 [2024-07-14 05:48:25.350754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:18.360 [2024-07-14 05:48:25.350781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.350949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.350975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.351153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.351179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.351338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.351363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.351566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.351591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.351742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.351768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 
00:34:18.360 [2024-07-14 05:48:25.351924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.351951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.352106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.352132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.352312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.352338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.352491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.352517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.352665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.352691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.352845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.352878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.353079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.353105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.353288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.353314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.353485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.353510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.353721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.353747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 
00:34:18.360 [2024-07-14 05:48:25.353927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.353953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.354104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.354130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.354290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.360 [2024-07-14 05:48:25.354315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1405840 with addr=10.0.0.2, port=4420 00:34:18.360 qpair failed and we were unable to recover it. 00:34:18.360 [2024-07-14 05:48:25.354403] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:18.360 [2024-07-14 05:48:25.356999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.360 [2024-07-14 05:48:25.357209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.360 [2024-07-14 05:48:25.357236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.360 [2024-07-14 05:48:25.357253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.360 [2024-07-14 05:48:25.357266] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.360 [2024-07-14 05:48:25.357315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.360 qpair failed and we were unable to recover it. 
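Once the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above appears, the records that follow no longer show ECONNREFUSED; instead the target rejects the Fabrics CONNECT ("Unknown controller ID 0x1") and the host reports the failed CONNECT poll. A hedged sketch of how the listener could be verified from the host side, assuming iproute2's ss and nvme-cli are installed (these checks are not part of the test script):

    # confirm something is listening on the NVMe/TCP service port
    ss -ltn '( sport = :4420 )'
    # query the discovery service once its listener is added (see the rpc_cmd below)
    nvme discover -t tcp -a 10.0.0.2 -s 4420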
00:34:18.360 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.360 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:18.360 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.360 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:18.360 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.360 05:48:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3393472 00:34:18.360 [2024-07-14 05:48:25.366782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.361 [2024-07-14 05:48:25.366958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.361 [2024-07-14 05:48:25.366986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.361 [2024-07-14 05:48:25.367003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.361 [2024-07-14 05:48:25.367017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.361 [2024-07-14 05:48:25.367046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.361 qpair failed and we were unable to recover it. 00:34:18.361 [2024-07-14 05:48:25.376813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.361 [2024-07-14 05:48:25.376984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.361 [2024-07-14 05:48:25.377012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.361 [2024-07-14 05:48:25.377028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.361 [2024-07-14 05:48:25.377042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.361 [2024-07-14 05:48:25.377072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.361 qpair failed and we were unable to recover it. 
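For reference, the rpc_cmd calls scattered through the trace above (create the TCP transport, create subsystem cnode1, attach the Malloc0 namespace, add the data-path and discovery listeners) amount to the following target bring-up. This is a minimal sketch using SPDK's in-tree scripts/rpc.py client, not the test script itself; the Malloc0 size and block size are assumptions, since the trace only shows the bdev name:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # size/block size assumed
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420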
00:34:18.361 [2024-07-14 05:48:25.386801] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.361 [2024-07-14 05:48:25.387007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.361 [2024-07-14 05:48:25.387034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.361 [2024-07-14 05:48:25.387050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.361 [2024-07-14 05:48:25.387064] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.361 [2024-07-14 05:48:25.387094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.361 qpair failed and we were unable to recover it. 00:34:18.361 [2024-07-14 05:48:25.396820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.361 [2024-07-14 05:48:25.397010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.361 [2024-07-14 05:48:25.397038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.361 [2024-07-14 05:48:25.397054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.361 [2024-07-14 05:48:25.397068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.361 [2024-07-14 05:48:25.397097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.361 qpair failed and we were unable to recover it. 00:34:18.361 [2024-07-14 05:48:25.406847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.361 [2024-07-14 05:48:25.407062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.361 [2024-07-14 05:48:25.407094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.361 [2024-07-14 05:48:25.407111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.361 [2024-07-14 05:48:25.407125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.361 [2024-07-14 05:48:25.407155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.361 qpair failed and we were unable to recover it. 
00:34:18.620 [2024-07-14 05:48:25.416934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.620 [2024-07-14 05:48:25.417090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.620 [2024-07-14 05:48:25.417119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.620 [2024-07-14 05:48:25.417135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.620 [2024-07-14 05:48:25.417149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.620 [2024-07-14 05:48:25.417184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.620 qpair failed and we were unable to recover it. 00:34:18.620 [2024-07-14 05:48:25.426912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.620 [2024-07-14 05:48:25.427076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.620 [2024-07-14 05:48:25.427103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.620 [2024-07-14 05:48:25.427118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.620 [2024-07-14 05:48:25.427133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.620 [2024-07-14 05:48:25.427178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.620 qpair failed and we were unable to recover it. 00:34:18.620 [2024-07-14 05:48:25.436912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.620 [2024-07-14 05:48:25.437075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.620 [2024-07-14 05:48:25.437102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.620 [2024-07-14 05:48:25.437119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.620 [2024-07-14 05:48:25.437133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.620 [2024-07-14 05:48:25.437174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.620 qpair failed and we were unable to recover it. 
00:34:18.620 [2024-07-14 05:48:25.446897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.620 [2024-07-14 05:48:25.447057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.620 [2024-07-14 05:48:25.447084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.620 [2024-07-14 05:48:25.447099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.620 [2024-07-14 05:48:25.447114] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.620 [2024-07-14 05:48:25.447150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.620 qpair failed and we were unable to recover it. 00:34:18.620 [2024-07-14 05:48:25.456975] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.620 [2024-07-14 05:48:25.457128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.620 [2024-07-14 05:48:25.457155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.620 [2024-07-14 05:48:25.457174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.620 [2024-07-14 05:48:25.457187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.620 [2024-07-14 05:48:25.457231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.620 qpair failed and we were unable to recover it. 00:34:18.620 [2024-07-14 05:48:25.466978] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.620 [2024-07-14 05:48:25.467140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.620 [2024-07-14 05:48:25.467173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.620 [2024-07-14 05:48:25.467189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.620 [2024-07-14 05:48:25.467203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.620 [2024-07-14 05:48:25.467232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.620 qpair failed and we were unable to recover it. 
00:34:18.620 [2024-07-14 05:48:25.477067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.620 [2024-07-14 05:48:25.477223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.620 [2024-07-14 05:48:25.477251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.620 [2024-07-14 05:48:25.477266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.620 [2024-07-14 05:48:25.477281] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.620 [2024-07-14 05:48:25.477325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.620 qpair failed and we were unable to recover it. 00:34:18.620 [2024-07-14 05:48:25.487079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.620 [2024-07-14 05:48:25.487231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.620 [2024-07-14 05:48:25.487257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.620 [2024-07-14 05:48:25.487272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.620 [2024-07-14 05:48:25.487286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.620 [2024-07-14 05:48:25.487323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.620 qpair failed and we were unable to recover it. 00:34:18.620 [2024-07-14 05:48:25.497123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.620 [2024-07-14 05:48:25.497286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.620 [2024-07-14 05:48:25.497318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.620 [2024-07-14 05:48:25.497334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.620 [2024-07-14 05:48:25.497348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.620 [2024-07-14 05:48:25.497377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.620 qpair failed and we were unable to recover it. 
00:34:18.620 [2024-07-14 05:48:25.507086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.620 [2024-07-14 05:48:25.507254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.620 [2024-07-14 05:48:25.507280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.620 [2024-07-14 05:48:25.507295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.620 [2024-07-14 05:48:25.507309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.620 [2024-07-14 05:48:25.507338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.620 qpair failed and we were unable to recover it. 00:34:18.620 [2024-07-14 05:48:25.517150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.620 [2024-07-14 05:48:25.517308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.620 [2024-07-14 05:48:25.517334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.620 [2024-07-14 05:48:25.517349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.620 [2024-07-14 05:48:25.517364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.620 [2024-07-14 05:48:25.517393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.620 qpair failed and we were unable to recover it. 00:34:18.620 [2024-07-14 05:48:25.527217] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.620 [2024-07-14 05:48:25.527422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.621 [2024-07-14 05:48:25.527448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.621 [2024-07-14 05:48:25.527464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.621 [2024-07-14 05:48:25.527478] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.621 [2024-07-14 05:48:25.527521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.621 qpair failed and we were unable to recover it. 
00:34:18.621 [2024-07-14 05:48:25.537355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.621 [2024-07-14 05:48:25.537526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.621 [2024-07-14 05:48:25.537552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.621 [2024-07-14 05:48:25.537568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.621 [2024-07-14 05:48:25.537602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.621 [2024-07-14 05:48:25.537631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.621 qpair failed and we were unable to recover it. 00:34:18.621 [2024-07-14 05:48:25.547252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.621 [2024-07-14 05:48:25.547454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.621 [2024-07-14 05:48:25.547480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.621 [2024-07-14 05:48:25.547495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.621 [2024-07-14 05:48:25.547509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.621 [2024-07-14 05:48:25.547539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.621 qpair failed and we were unable to recover it. 00:34:18.621 [2024-07-14 05:48:25.557386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.621 [2024-07-14 05:48:25.557581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.621 [2024-07-14 05:48:25.557623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.621 [2024-07-14 05:48:25.557638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.621 [2024-07-14 05:48:25.557653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.621 [2024-07-14 05:48:25.557707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.621 qpair failed and we were unable to recover it. 
00:34:18.621 [2024-07-14 05:48:25.567278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.621 [2024-07-14 05:48:25.567485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.621 [2024-07-14 05:48:25.567511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.621 [2024-07-14 05:48:25.567527] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.621 [2024-07-14 05:48:25.567541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.621 [2024-07-14 05:48:25.567570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.621 qpair failed and we were unable to recover it. 00:34:18.621 [2024-07-14 05:48:25.577309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.621 [2024-07-14 05:48:25.577467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.621 [2024-07-14 05:48:25.577493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.621 [2024-07-14 05:48:25.577508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.621 [2024-07-14 05:48:25.577523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.621 [2024-07-14 05:48:25.577552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.621 qpair failed and we were unable to recover it. 00:34:18.621 [2024-07-14 05:48:25.587319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.621 [2024-07-14 05:48:25.587487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.621 [2024-07-14 05:48:25.587513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.621 [2024-07-14 05:48:25.587528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.621 [2024-07-14 05:48:25.587543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.621 [2024-07-14 05:48:25.587572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.621 qpair failed and we were unable to recover it. 
00:34:18.621 [2024-07-14 05:48:25.597393] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.621 [2024-07-14 05:48:25.597550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.621 [2024-07-14 05:48:25.597577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.621 [2024-07-14 05:48:25.597592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.621 [2024-07-14 05:48:25.597606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.621 [2024-07-14 05:48:25.597635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.621 qpair failed and we were unable to recover it. 00:34:18.621 [2024-07-14 05:48:25.607377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.621 [2024-07-14 05:48:25.607541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.621 [2024-07-14 05:48:25.607567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.621 [2024-07-14 05:48:25.607583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.621 [2024-07-14 05:48:25.607597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.621 [2024-07-14 05:48:25.607626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.621 qpair failed and we were unable to recover it. 00:34:18.621 [2024-07-14 05:48:25.617491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.621 [2024-07-14 05:48:25.617649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.621 [2024-07-14 05:48:25.617676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.621 [2024-07-14 05:48:25.617691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.621 [2024-07-14 05:48:25.617705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.621 [2024-07-14 05:48:25.617734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.621 qpair failed and we were unable to recover it. 
00:34:18.621 [2024-07-14 05:48:25.627466] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.621 [2024-07-14 05:48:25.627666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.621 [2024-07-14 05:48:25.627692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.621 [2024-07-14 05:48:25.627708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.621 [2024-07-14 05:48:25.627728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.621 [2024-07-14 05:48:25.627757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.621 qpair failed and we were unable to recover it. 00:34:18.621 [2024-07-14 05:48:25.637476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.621 [2024-07-14 05:48:25.637640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.621 [2024-07-14 05:48:25.637666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.621 [2024-07-14 05:48:25.637681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.621 [2024-07-14 05:48:25.637696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.621 [2024-07-14 05:48:25.637725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.621 qpair failed and we were unable to recover it. 00:34:18.621 [2024-07-14 05:48:25.647497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.621 [2024-07-14 05:48:25.647654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.621 [2024-07-14 05:48:25.647679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.621 [2024-07-14 05:48:25.647694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.621 [2024-07-14 05:48:25.647709] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.621 [2024-07-14 05:48:25.647739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.621 qpair failed and we were unable to recover it. 
00:34:18.621 [2024-07-14 05:48:25.657546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.621 [2024-07-14 05:48:25.657758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.621 [2024-07-14 05:48:25.657785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.621 [2024-07-14 05:48:25.657804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.621 [2024-07-14 05:48:25.657817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.621 [2024-07-14 05:48:25.657860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.621 qpair failed and we were unable to recover it. 00:34:18.621 [2024-07-14 05:48:25.667533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.621 [2024-07-14 05:48:25.667739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.621 [2024-07-14 05:48:25.667765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.621 [2024-07-14 05:48:25.667781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.621 [2024-07-14 05:48:25.667795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.622 [2024-07-14 05:48:25.667825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.622 qpair failed and we were unable to recover it. 00:34:18.622 [2024-07-14 05:48:25.677566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.622 [2024-07-14 05:48:25.677732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.622 [2024-07-14 05:48:25.677758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.622 [2024-07-14 05:48:25.677773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.622 [2024-07-14 05:48:25.677787] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.622 [2024-07-14 05:48:25.677816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.622 qpair failed and we were unable to recover it. 
00:34:18.622 [2024-07-14 05:48:25.687583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.622 [2024-07-14 05:48:25.687740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.622 [2024-07-14 05:48:25.687765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.622 [2024-07-14 05:48:25.687781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.622 [2024-07-14 05:48:25.687795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.622 [2024-07-14 05:48:25.687824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.622 qpair failed and we were unable to recover it. 00:34:18.622 [2024-07-14 05:48:25.697703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.622 [2024-07-14 05:48:25.697860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.622 [2024-07-14 05:48:25.697898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.622 [2024-07-14 05:48:25.697914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.622 [2024-07-14 05:48:25.697928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.622 [2024-07-14 05:48:25.697957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.622 qpair failed and we were unable to recover it. 00:34:18.622 [2024-07-14 05:48:25.707666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.622 [2024-07-14 05:48:25.707845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.622 [2024-07-14 05:48:25.707880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.622 [2024-07-14 05:48:25.707898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.622 [2024-07-14 05:48:25.707913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.622 [2024-07-14 05:48:25.707943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.622 qpair failed and we were unable to recover it. 
00:34:18.622 [2024-07-14 05:48:25.717679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.622 [2024-07-14 05:48:25.717860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.622 [2024-07-14 05:48:25.717891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.622 [2024-07-14 05:48:25.717906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.622 [2024-07-14 05:48:25.717926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.622 [2024-07-14 05:48:25.717955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.622 qpair failed and we were unable to recover it. 00:34:18.880 [2024-07-14 05:48:25.727742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.880 [2024-07-14 05:48:25.727953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.880 [2024-07-14 05:48:25.727982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.880 [2024-07-14 05:48:25.727998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.880 [2024-07-14 05:48:25.728013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.880 [2024-07-14 05:48:25.728042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.880 qpair failed and we were unable to recover it. 00:34:18.880 [2024-07-14 05:48:25.737731] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.880 [2024-07-14 05:48:25.737893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.880 [2024-07-14 05:48:25.737921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.880 [2024-07-14 05:48:25.737937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.880 [2024-07-14 05:48:25.737951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.880 [2024-07-14 05:48:25.737981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.880 qpair failed and we were unable to recover it. 
00:34:18.880 [2024-07-14 05:48:25.747799] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.880 [2024-07-14 05:48:25.747966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.880 [2024-07-14 05:48:25.747993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.880 [2024-07-14 05:48:25.748008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.880 [2024-07-14 05:48:25.748022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.880 [2024-07-14 05:48:25.748052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.880 qpair failed and we were unable to recover it. 00:34:18.880 [2024-07-14 05:48:25.757808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.880 [2024-07-14 05:48:25.758004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.881 [2024-07-14 05:48:25.758030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.881 [2024-07-14 05:48:25.758046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.881 [2024-07-14 05:48:25.758060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.881 [2024-07-14 05:48:25.758089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.881 qpair failed and we were unable to recover it. 00:34:18.881 [2024-07-14 05:48:25.767855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.881 [2024-07-14 05:48:25.768031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.881 [2024-07-14 05:48:25.768057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.881 [2024-07-14 05:48:25.768073] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.881 [2024-07-14 05:48:25.768088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.881 [2024-07-14 05:48:25.768117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.881 qpair failed and we were unable to recover it. 
00:34:18.881 [2024-07-14 05:48:25.777859] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.881 [2024-07-14 05:48:25.778026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.881 [2024-07-14 05:48:25.778052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.881 [2024-07-14 05:48:25.778067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.881 [2024-07-14 05:48:25.778081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.881 [2024-07-14 05:48:25.778111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.881 qpair failed and we were unable to recover it. 00:34:18.881 [2024-07-14 05:48:25.787969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.881 [2024-07-14 05:48:25.788132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.881 [2024-07-14 05:48:25.788158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.881 [2024-07-14 05:48:25.788174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.881 [2024-07-14 05:48:25.788204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.881 [2024-07-14 05:48:25.788232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.881 qpair failed and we were unable to recover it. 00:34:18.881 [2024-07-14 05:48:25.797927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.881 [2024-07-14 05:48:25.798093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.881 [2024-07-14 05:48:25.798120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.881 [2024-07-14 05:48:25.798135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.881 [2024-07-14 05:48:25.798149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.881 [2024-07-14 05:48:25.798178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.881 qpair failed and we were unable to recover it. 
00:34:18.881 [2024-07-14 05:48:25.807928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.881 [2024-07-14 05:48:25.808079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.881 [2024-07-14 05:48:25.808105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.881 [2024-07-14 05:48:25.808127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.881 [2024-07-14 05:48:25.808142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.881 [2024-07-14 05:48:25.808171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.881 qpair failed and we were unable to recover it. 00:34:18.881 [2024-07-14 05:48:25.817967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.881 [2024-07-14 05:48:25.818129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.881 [2024-07-14 05:48:25.818155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.881 [2024-07-14 05:48:25.818170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.881 [2024-07-14 05:48:25.818199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.881 [2024-07-14 05:48:25.818228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.881 qpair failed and we were unable to recover it. 00:34:18.881 [2024-07-14 05:48:25.828033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.881 [2024-07-14 05:48:25.828203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.881 [2024-07-14 05:48:25.828229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.881 [2024-07-14 05:48:25.828260] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.881 [2024-07-14 05:48:25.828274] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.881 [2024-07-14 05:48:25.828303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.881 qpair failed and we were unable to recover it. 
00:34:18.881 [2024-07-14 05:48:25.838016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.881 [2024-07-14 05:48:25.838226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.881 [2024-07-14 05:48:25.838252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.881 [2024-07-14 05:48:25.838267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.881 [2024-07-14 05:48:25.838281] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.881 [2024-07-14 05:48:25.838310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.881 qpair failed and we were unable to recover it. 00:34:18.881 [2024-07-14 05:48:25.848069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.881 [2024-07-14 05:48:25.848265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.881 [2024-07-14 05:48:25.848290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.881 [2024-07-14 05:48:25.848306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.881 [2024-07-14 05:48:25.848320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.881 [2024-07-14 05:48:25.848350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.881 qpair failed and we were unable to recover it. 00:34:18.881 [2024-07-14 05:48:25.858071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.881 [2024-07-14 05:48:25.858229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.881 [2024-07-14 05:48:25.858256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.881 [2024-07-14 05:48:25.858271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.881 [2024-07-14 05:48:25.858286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.881 [2024-07-14 05:48:25.858314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.881 qpair failed and we were unable to recover it. 
00:34:18.881 [2024-07-14 05:48:25.868122] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.881 [2024-07-14 05:48:25.868291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.881 [2024-07-14 05:48:25.868317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.881 [2024-07-14 05:48:25.868332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.881 [2024-07-14 05:48:25.868347] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.881 [2024-07-14 05:48:25.868376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.881 qpair failed and we were unable to recover it. 00:34:18.881 [2024-07-14 05:48:25.878164] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.881 [2024-07-14 05:48:25.878329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.881 [2024-07-14 05:48:25.878355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.881 [2024-07-14 05:48:25.878370] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.881 [2024-07-14 05:48:25.878385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.881 [2024-07-14 05:48:25.878429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.881 qpair failed and we were unable to recover it. 00:34:18.881 [2024-07-14 05:48:25.888151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.881 [2024-07-14 05:48:25.888320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.881 [2024-07-14 05:48:25.888348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.881 [2024-07-14 05:48:25.888363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.881 [2024-07-14 05:48:25.888377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.881 [2024-07-14 05:48:25.888406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.881 qpair failed and we were unable to recover it. 
00:34:18.881 [2024-07-14 05:48:25.898223] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.881 [2024-07-14 05:48:25.898399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.881 [2024-07-14 05:48:25.898426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.881 [2024-07-14 05:48:25.898462] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.881 [2024-07-14 05:48:25.898476] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.882 [2024-07-14 05:48:25.898505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.882 qpair failed and we were unable to recover it. 00:34:18.882 [2024-07-14 05:48:25.908292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.882 [2024-07-14 05:48:25.908458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.882 [2024-07-14 05:48:25.908485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.882 [2024-07-14 05:48:25.908500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.882 [2024-07-14 05:48:25.908514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.882 [2024-07-14 05:48:25.908543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.882 qpair failed and we were unable to recover it. 00:34:18.882 [2024-07-14 05:48:25.918245] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.882 [2024-07-14 05:48:25.918426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.882 [2024-07-14 05:48:25.918453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.882 [2024-07-14 05:48:25.918483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.882 [2024-07-14 05:48:25.918497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.882 [2024-07-14 05:48:25.918525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.882 qpair failed and we were unable to recover it. 
00:34:18.882 [2024-07-14 05:48:25.928323] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.882 [2024-07-14 05:48:25.928482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.882 [2024-07-14 05:48:25.928510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.882 [2024-07-14 05:48:25.928526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.882 [2024-07-14 05:48:25.928554] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.882 [2024-07-14 05:48:25.928583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.882 qpair failed and we were unable to recover it. 00:34:18.882 [2024-07-14 05:48:25.938312] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.882 [2024-07-14 05:48:25.938467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.882 [2024-07-14 05:48:25.938493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.882 [2024-07-14 05:48:25.938509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.882 [2024-07-14 05:48:25.938523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.882 [2024-07-14 05:48:25.938552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.882 qpair failed and we were unable to recover it. 00:34:18.882 [2024-07-14 05:48:25.948329] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.882 [2024-07-14 05:48:25.948490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.882 [2024-07-14 05:48:25.948518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.882 [2024-07-14 05:48:25.948534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.882 [2024-07-14 05:48:25.948547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.882 [2024-07-14 05:48:25.948574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.882 qpair failed and we were unable to recover it. 
00:34:18.882 [2024-07-14 05:48:25.958346] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.882 [2024-07-14 05:48:25.958504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.882 [2024-07-14 05:48:25.958532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.882 [2024-07-14 05:48:25.958548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.882 [2024-07-14 05:48:25.958562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.882 [2024-07-14 05:48:25.958591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.882 qpair failed and we were unable to recover it. 00:34:18.882 [2024-07-14 05:48:25.968485] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.882 [2024-07-14 05:48:25.968694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.882 [2024-07-14 05:48:25.968736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.882 [2024-07-14 05:48:25.968751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.882 [2024-07-14 05:48:25.968764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.882 [2024-07-14 05:48:25.968807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.882 qpair failed and we were unable to recover it. 00:34:18.882 [2024-07-14 05:48:25.978518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.882 [2024-07-14 05:48:25.978688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.882 [2024-07-14 05:48:25.978714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.882 [2024-07-14 05:48:25.978730] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.882 [2024-07-14 05:48:25.978742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:18.882 [2024-07-14 05:48:25.978784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.882 qpair failed and we were unable to recover it. 
00:34:19.140 [2024-07-14 05:48:25.988460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.140 [2024-07-14 05:48:25.988621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.140 [2024-07-14 05:48:25.988653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.140 [2024-07-14 05:48:25.988669] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.140 [2024-07-14 05:48:25.988683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.140 [2024-07-14 05:48:25.988712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.140 qpair failed and we were unable to recover it. 00:34:19.140 [2024-07-14 05:48:25.998469] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.140 [2024-07-14 05:48:25.998638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.140 [2024-07-14 05:48:25.998667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.140 [2024-07-14 05:48:25.998683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.140 [2024-07-14 05:48:25.998697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.140 [2024-07-14 05:48:25.998727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.140 qpair failed and we were unable to recover it. 00:34:19.140 [2024-07-14 05:48:26.008506] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.140 [2024-07-14 05:48:26.008738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.140 [2024-07-14 05:48:26.008765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.140 [2024-07-14 05:48:26.008781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.140 [2024-07-14 05:48:26.008794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.140 [2024-07-14 05:48:26.008837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.140 qpair failed and we were unable to recover it. 
00:34:19.140 [2024-07-14 05:48:26.018592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.140 [2024-07-14 05:48:26.018746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.140 [2024-07-14 05:48:26.018774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.140 [2024-07-14 05:48:26.018789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.140 [2024-07-14 05:48:26.018803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.140 [2024-07-14 05:48:26.018846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.140 qpair failed and we were unable to recover it. 00:34:19.140 [2024-07-14 05:48:26.028615] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.140 [2024-07-14 05:48:26.028777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.140 [2024-07-14 05:48:26.028804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.140 [2024-07-14 05:48:26.028819] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.140 [2024-07-14 05:48:26.028833] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.140 [2024-07-14 05:48:26.028861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.140 qpair failed and we were unable to recover it. 00:34:19.140 [2024-07-14 05:48:26.038676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.140 [2024-07-14 05:48:26.038846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.140 [2024-07-14 05:48:26.038893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.140 [2024-07-14 05:48:26.038912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.140 [2024-07-14 05:48:26.038926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.140 [2024-07-14 05:48:26.038956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.140 qpair failed and we were unable to recover it. 
00:34:19.140 [2024-07-14 05:48:26.048626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.140 [2024-07-14 05:48:26.048821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.140 [2024-07-14 05:48:26.048862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.140 [2024-07-14 05:48:26.048888] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.140 [2024-07-14 05:48:26.048916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.140 [2024-07-14 05:48:26.048946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.140 qpair failed and we were unable to recover it. 00:34:19.140 [2024-07-14 05:48:26.058673] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.140 [2024-07-14 05:48:26.058832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.140 [2024-07-14 05:48:26.058860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.140 [2024-07-14 05:48:26.058883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.140 [2024-07-14 05:48:26.058898] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.140 [2024-07-14 05:48:26.058927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.140 qpair failed and we were unable to recover it. 00:34:19.140 [2024-07-14 05:48:26.068688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.140 [2024-07-14 05:48:26.068878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.140 [2024-07-14 05:48:26.068905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.140 [2024-07-14 05:48:26.068921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.140 [2024-07-14 05:48:26.068935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.140 [2024-07-14 05:48:26.068964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.140 qpair failed and we were unable to recover it. 
00:34:19.140 [2024-07-14 05:48:26.078715] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.140 [2024-07-14 05:48:26.078917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.140 [2024-07-14 05:48:26.078949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.140 [2024-07-14 05:48:26.078966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.140 [2024-07-14 05:48:26.078981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.140 [2024-07-14 05:48:26.079011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.140 qpair failed and we were unable to recover it. 00:34:19.140 [2024-07-14 05:48:26.088741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.140 [2024-07-14 05:48:26.088910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.140 [2024-07-14 05:48:26.088937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.140 [2024-07-14 05:48:26.088952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.140 [2024-07-14 05:48:26.088968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.140 [2024-07-14 05:48:26.088997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.140 qpair failed and we were unable to recover it. 00:34:19.140 [2024-07-14 05:48:26.098768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.140 [2024-07-14 05:48:26.098925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.140 [2024-07-14 05:48:26.098952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.140 [2024-07-14 05:48:26.098968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.140 [2024-07-14 05:48:26.098983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.140 [2024-07-14 05:48:26.099013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.140 qpair failed and we were unable to recover it. 
00:34:19.140 [2024-07-14 05:48:26.108807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.140 [2024-07-14 05:48:26.108976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.140 [2024-07-14 05:48:26.109002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.140 [2024-07-14 05:48:26.109018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.140 [2024-07-14 05:48:26.109032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.140 [2024-07-14 05:48:26.109062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.140 qpair failed and we were unable to recover it. 00:34:19.140 [2024-07-14 05:48:26.118972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.141 [2024-07-14 05:48:26.119138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.141 [2024-07-14 05:48:26.119172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.141 [2024-07-14 05:48:26.119188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.141 [2024-07-14 05:48:26.119201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.141 [2024-07-14 05:48:26.119236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.141 qpair failed and we were unable to recover it. 00:34:19.141 [2024-07-14 05:48:26.128855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.141 [2024-07-14 05:48:26.129020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.141 [2024-07-14 05:48:26.129047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.141 [2024-07-14 05:48:26.129063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.141 [2024-07-14 05:48:26.129077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.141 [2024-07-14 05:48:26.129106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.141 qpair failed and we were unable to recover it. 
00:34:19.141 [2024-07-14 05:48:26.138892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.141 [2024-07-14 05:48:26.139068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.141 [2024-07-14 05:48:26.139094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.141 [2024-07-14 05:48:26.139110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.141 [2024-07-14 05:48:26.139124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.141 [2024-07-14 05:48:26.139153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.141 qpair failed and we were unable to recover it. 00:34:19.141 [2024-07-14 05:48:26.148928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.141 [2024-07-14 05:48:26.149091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.141 [2024-07-14 05:48:26.149118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.141 [2024-07-14 05:48:26.149133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.141 [2024-07-14 05:48:26.149146] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.141 [2024-07-14 05:48:26.149195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.141 qpair failed and we were unable to recover it. 00:34:19.141 [2024-07-14 05:48:26.158943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.141 [2024-07-14 05:48:26.159102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.141 [2024-07-14 05:48:26.159129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.141 [2024-07-14 05:48:26.159144] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.141 [2024-07-14 05:48:26.159158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.141 [2024-07-14 05:48:26.159189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.141 qpair failed and we were unable to recover it. 
00:34:19.141 [2024-07-14 05:48:26.169155] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.141 [2024-07-14 05:48:26.169336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.141 [2024-07-14 05:48:26.169369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.141 [2024-07-14 05:48:26.169385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.141 [2024-07-14 05:48:26.169399] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.141 [2024-07-14 05:48:26.169429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.141 qpair failed and we were unable to recover it. 00:34:19.141 [2024-07-14 05:48:26.179048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.141 [2024-07-14 05:48:26.179247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.141 [2024-07-14 05:48:26.179288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.141 [2024-07-14 05:48:26.179303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.141 [2024-07-14 05:48:26.179317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.141 [2024-07-14 05:48:26.179361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.141 qpair failed and we were unable to recover it. 00:34:19.141 [2024-07-14 05:48:26.189104] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.141 [2024-07-14 05:48:26.189265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.141 [2024-07-14 05:48:26.189291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.141 [2024-07-14 05:48:26.189307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.141 [2024-07-14 05:48:26.189321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.141 [2024-07-14 05:48:26.189349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.141 qpair failed and we were unable to recover it. 
00:34:19.141 [2024-07-14 05:48:26.199103] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.141 [2024-07-14 05:48:26.199262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.141 [2024-07-14 05:48:26.199288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.141 [2024-07-14 05:48:26.199304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.141 [2024-07-14 05:48:26.199318] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.141 [2024-07-14 05:48:26.199347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.141 qpair failed and we were unable to recover it. 00:34:19.141 [2024-07-14 05:48:26.209082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.141 [2024-07-14 05:48:26.209249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.141 [2024-07-14 05:48:26.209275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.141 [2024-07-14 05:48:26.209291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.141 [2024-07-14 05:48:26.209305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.141 [2024-07-14 05:48:26.209339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.141 qpair failed and we were unable to recover it. 00:34:19.141 [2024-07-14 05:48:26.219107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.141 [2024-07-14 05:48:26.219259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.141 [2024-07-14 05:48:26.219287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.141 [2024-07-14 05:48:26.219303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.141 [2024-07-14 05:48:26.219317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.141 [2024-07-14 05:48:26.219345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.141 qpair failed and we were unable to recover it. 
00:34:19.141 [2024-07-14 05:48:26.229138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.141 [2024-07-14 05:48:26.229308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.141 [2024-07-14 05:48:26.229334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.141 [2024-07-14 05:48:26.229365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.141 [2024-07-14 05:48:26.229379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.141 [2024-07-14 05:48:26.229408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.141 qpair failed and we were unable to recover it. 00:34:19.141 [2024-07-14 05:48:26.239220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.141 [2024-07-14 05:48:26.239459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.141 [2024-07-14 05:48:26.239485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.141 [2024-07-14 05:48:26.239501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.141 [2024-07-14 05:48:26.239514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.141 [2024-07-14 05:48:26.239541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.141 qpair failed and we were unable to recover it. 00:34:19.400 [2024-07-14 05:48:26.249206] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.400 [2024-07-14 05:48:26.249362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.400 [2024-07-14 05:48:26.249402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.400 [2024-07-14 05:48:26.249432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.400 [2024-07-14 05:48:26.249461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.400 [2024-07-14 05:48:26.249512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.400 qpair failed and we were unable to recover it. 
00:34:19.400 [2024-07-14 05:48:26.259220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.400 [2024-07-14 05:48:26.259374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.400 [2024-07-14 05:48:26.259408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.400 [2024-07-14 05:48:26.259424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.400 [2024-07-14 05:48:26.259438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.400 [2024-07-14 05:48:26.259468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.400 qpair failed and we were unable to recover it. 00:34:19.400 [2024-07-14 05:48:26.269347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.400 [2024-07-14 05:48:26.269522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.400 [2024-07-14 05:48:26.269549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.400 [2024-07-14 05:48:26.269581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.400 [2024-07-14 05:48:26.269594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.400 [2024-07-14 05:48:26.269623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.400 qpair failed and we were unable to recover it. 00:34:19.400 [2024-07-14 05:48:26.279343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.400 [2024-07-14 05:48:26.279510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.400 [2024-07-14 05:48:26.279536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.401 [2024-07-14 05:48:26.279552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.401 [2024-07-14 05:48:26.279581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.401 [2024-07-14 05:48:26.279610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.401 qpair failed and we were unable to recover it. 
00:34:19.401 [2024-07-14 05:48:26.289381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.401 [2024-07-14 05:48:26.289606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.401 [2024-07-14 05:48:26.289632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.401 [2024-07-14 05:48:26.289648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.401 [2024-07-14 05:48:26.289662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.401 [2024-07-14 05:48:26.289704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.401 qpair failed and we were unable to recover it. 00:34:19.401 [2024-07-14 05:48:26.299403] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.401 [2024-07-14 05:48:26.299562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.401 [2024-07-14 05:48:26.299589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.401 [2024-07-14 05:48:26.299605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.401 [2024-07-14 05:48:26.299639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.401 [2024-07-14 05:48:26.299669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.401 qpair failed and we were unable to recover it. 00:34:19.401 [2024-07-14 05:48:26.309418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.401 [2024-07-14 05:48:26.309582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.401 [2024-07-14 05:48:26.309609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.401 [2024-07-14 05:48:26.309624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.401 [2024-07-14 05:48:26.309638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.401 [2024-07-14 05:48:26.309667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.401 qpair failed and we were unable to recover it. 
00:34:19.401 [2024-07-14 05:48:26.319434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.401 [2024-07-14 05:48:26.319591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.401 [2024-07-14 05:48:26.319618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.401 [2024-07-14 05:48:26.319634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.401 [2024-07-14 05:48:26.319648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.401 [2024-07-14 05:48:26.319676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.401 qpair failed and we were unable to recover it. 00:34:19.401 [2024-07-14 05:48:26.329459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.401 [2024-07-14 05:48:26.329653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.401 [2024-07-14 05:48:26.329694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.401 [2024-07-14 05:48:26.329710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.401 [2024-07-14 05:48:26.329723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.401 [2024-07-14 05:48:26.329766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.401 qpair failed and we were unable to recover it. 00:34:19.401 [2024-07-14 05:48:26.339456] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.401 [2024-07-14 05:48:26.339609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.401 [2024-07-14 05:48:26.339637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.401 [2024-07-14 05:48:26.339653] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.401 [2024-07-14 05:48:26.339666] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.401 [2024-07-14 05:48:26.339695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.401 qpair failed and we were unable to recover it. 
00:34:19.401 [2024-07-14 05:48:26.349522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.401 [2024-07-14 05:48:26.349704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.401 [2024-07-14 05:48:26.349745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.401 [2024-07-14 05:48:26.349761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.401 [2024-07-14 05:48:26.349775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.401 [2024-07-14 05:48:26.349817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.401 qpair failed and we were unable to recover it. 00:34:19.401 [2024-07-14 05:48:26.359524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.401 [2024-07-14 05:48:26.359676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.401 [2024-07-14 05:48:26.359704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.401 [2024-07-14 05:48:26.359719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.401 [2024-07-14 05:48:26.359733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.401 [2024-07-14 05:48:26.359762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.401 qpair failed and we were unable to recover it. 00:34:19.401 [2024-07-14 05:48:26.369555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.401 [2024-07-14 05:48:26.369743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.401 [2024-07-14 05:48:26.369784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.401 [2024-07-14 05:48:26.369799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.401 [2024-07-14 05:48:26.369813] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.401 [2024-07-14 05:48:26.369856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.401 qpair failed and we were unable to recover it. 
00:34:19.401 [2024-07-14 05:48:26.379565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.401 [2024-07-14 05:48:26.379722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.401 [2024-07-14 05:48:26.379750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.401 [2024-07-14 05:48:26.379765] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.401 [2024-07-14 05:48:26.379779] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.401 [2024-07-14 05:48:26.379807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.401 qpair failed and we were unable to recover it. 00:34:19.401 [2024-07-14 05:48:26.389616] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.401 [2024-07-14 05:48:26.389803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.401 [2024-07-14 05:48:26.389845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.401 [2024-07-14 05:48:26.389861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.401 [2024-07-14 05:48:26.389901] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.401 [2024-07-14 05:48:26.389932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.401 qpair failed and we were unable to recover it. 00:34:19.401 [2024-07-14 05:48:26.399676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.401 [2024-07-14 05:48:26.399852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.401 [2024-07-14 05:48:26.399884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.401 [2024-07-14 05:48:26.399901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.401 [2024-07-14 05:48:26.399915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.401 [2024-07-14 05:48:26.399943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.401 qpair failed and we were unable to recover it. 
00:34:19.401 [2024-07-14 05:48:26.409709] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.401 [2024-07-14 05:48:26.409895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.401 [2024-07-14 05:48:26.409923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.401 [2024-07-14 05:48:26.409939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.401 [2024-07-14 05:48:26.409952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.401 [2024-07-14 05:48:26.409981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.401 qpair failed and we were unable to recover it. 00:34:19.401 [2024-07-14 05:48:26.419671] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.401 [2024-07-14 05:48:26.419837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.401 [2024-07-14 05:48:26.419871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.401 [2024-07-14 05:48:26.419890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.402 [2024-07-14 05:48:26.419904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.402 [2024-07-14 05:48:26.419932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.402 qpair failed and we were unable to recover it. 00:34:19.402 [2024-07-14 05:48:26.429748] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.402 [2024-07-14 05:48:26.429914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.402 [2024-07-14 05:48:26.429941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.402 [2024-07-14 05:48:26.429956] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.402 [2024-07-14 05:48:26.429970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.402 [2024-07-14 05:48:26.429998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.402 qpair failed and we were unable to recover it. 
00:34:19.402 [2024-07-14 05:48:26.439761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.402 [2024-07-14 05:48:26.439928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.402 [2024-07-14 05:48:26.439955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.402 [2024-07-14 05:48:26.439971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.402 [2024-07-14 05:48:26.439984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.402 [2024-07-14 05:48:26.440013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.402 qpair failed and we were unable to recover it. 00:34:19.402 [2024-07-14 05:48:26.449784] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.402 [2024-07-14 05:48:26.449950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.402 [2024-07-14 05:48:26.449978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.402 [2024-07-14 05:48:26.449994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.402 [2024-07-14 05:48:26.450007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.402 [2024-07-14 05:48:26.450036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.402 qpair failed and we were unable to recover it. 00:34:19.402 [2024-07-14 05:48:26.459824] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.402 [2024-07-14 05:48:26.460009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.402 [2024-07-14 05:48:26.460036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.402 [2024-07-14 05:48:26.460051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.402 [2024-07-14 05:48:26.460065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.402 [2024-07-14 05:48:26.460094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.402 qpair failed and we were unable to recover it. 
00:34:19.402 [2024-07-14 05:48:26.469842] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.402 [2024-07-14 05:48:26.470009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.402 [2024-07-14 05:48:26.470037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.402 [2024-07-14 05:48:26.470052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.402 [2024-07-14 05:48:26.470066] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.402 [2024-07-14 05:48:26.470096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.402 qpair failed and we were unable to recover it. 00:34:19.402 [2024-07-14 05:48:26.479873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.402 [2024-07-14 05:48:26.480032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.402 [2024-07-14 05:48:26.480059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.402 [2024-07-14 05:48:26.480075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.402 [2024-07-14 05:48:26.480095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.402 [2024-07-14 05:48:26.480124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.402 qpair failed and we were unable to recover it. 00:34:19.402 [2024-07-14 05:48:26.489907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.402 [2024-07-14 05:48:26.490085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.402 [2024-07-14 05:48:26.490111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.402 [2024-07-14 05:48:26.490127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.402 [2024-07-14 05:48:26.490141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.402 [2024-07-14 05:48:26.490170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.402 qpair failed and we were unable to recover it. 
00:34:19.402 [2024-07-14 05:48:26.499911] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.402 [2024-07-14 05:48:26.500085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.402 [2024-07-14 05:48:26.500112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.402 [2024-07-14 05:48:26.500127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.402 [2024-07-14 05:48:26.500141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.402 [2024-07-14 05:48:26.500170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.402 qpair failed and we were unable to recover it. 00:34:19.661 [2024-07-14 05:48:26.509975] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.661 [2024-07-14 05:48:26.510136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.661 [2024-07-14 05:48:26.510165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.661 [2024-07-14 05:48:26.510181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.661 [2024-07-14 05:48:26.510194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.661 [2024-07-14 05:48:26.510226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.661 qpair failed and we were unable to recover it. 00:34:19.661 [2024-07-14 05:48:26.520007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.661 [2024-07-14 05:48:26.520197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.661 [2024-07-14 05:48:26.520226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.661 [2024-07-14 05:48:26.520242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.661 [2024-07-14 05:48:26.520271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.661 [2024-07-14 05:48:26.520300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.661 qpair failed and we were unable to recover it. 
00:34:19.661 [2024-07-14 05:48:26.530026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.661 [2024-07-14 05:48:26.530180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.661 [2024-07-14 05:48:26.530207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.661 [2024-07-14 05:48:26.530223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.661 [2024-07-14 05:48:26.530237] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.661 [2024-07-14 05:48:26.530265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.661 qpair failed and we were unable to recover it. 00:34:19.661 [2024-07-14 05:48:26.540028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.661 [2024-07-14 05:48:26.540183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.661 [2024-07-14 05:48:26.540210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.661 [2024-07-14 05:48:26.540225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.661 [2024-07-14 05:48:26.540238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.661 [2024-07-14 05:48:26.540268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.661 qpair failed and we were unable to recover it. 00:34:19.661 [2024-07-14 05:48:26.550068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.661 [2024-07-14 05:48:26.550231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.661 [2024-07-14 05:48:26.550258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.661 [2024-07-14 05:48:26.550273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.661 [2024-07-14 05:48:26.550286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.661 [2024-07-14 05:48:26.550315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.661 qpair failed and we were unable to recover it. 
00:34:19.661 [2024-07-14 05:48:26.560099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.661 [2024-07-14 05:48:26.560259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.661 [2024-07-14 05:48:26.560286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.661 [2024-07-14 05:48:26.560303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.661 [2024-07-14 05:48:26.560331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.661 [2024-07-14 05:48:26.560360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.661 qpair failed and we were unable to recover it. 00:34:19.661 [2024-07-14 05:48:26.570166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.661 [2024-07-14 05:48:26.570325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.661 [2024-07-14 05:48:26.570351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.661 [2024-07-14 05:48:26.570372] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.661 [2024-07-14 05:48:26.570386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.661 [2024-07-14 05:48:26.570431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.661 qpair failed and we were unable to recover it. 00:34:19.661 [2024-07-14 05:48:26.580232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.661 [2024-07-14 05:48:26.580391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.661 [2024-07-14 05:48:26.580417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.661 [2024-07-14 05:48:26.580432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.661 [2024-07-14 05:48:26.580445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.661 [2024-07-14 05:48:26.580489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.661 qpair failed and we were unable to recover it. 
00:34:19.661 [2024-07-14 05:48:26.590199] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.661 [2024-07-14 05:48:26.590356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.661 [2024-07-14 05:48:26.590382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.661 [2024-07-14 05:48:26.590398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.661 [2024-07-14 05:48:26.590411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.661 [2024-07-14 05:48:26.590440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.661 qpair failed and we were unable to recover it. 00:34:19.661 [2024-07-14 05:48:26.600212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.661 [2024-07-14 05:48:26.600375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.661 [2024-07-14 05:48:26.600402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.661 [2024-07-14 05:48:26.600418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.661 [2024-07-14 05:48:26.600431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.661 [2024-07-14 05:48:26.600460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.661 qpair failed and we were unable to recover it. 00:34:19.661 [2024-07-14 05:48:26.610234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.661 [2024-07-14 05:48:26.610398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.661 [2024-07-14 05:48:26.610424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.661 [2024-07-14 05:48:26.610439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.661 [2024-07-14 05:48:26.610468] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.662 [2024-07-14 05:48:26.610498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.662 qpair failed and we were unable to recover it. 
00:34:19.662 [2024-07-14 05:48:26.620249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.662 [2024-07-14 05:48:26.620402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.662 [2024-07-14 05:48:26.620428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.662 [2024-07-14 05:48:26.620443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.662 [2024-07-14 05:48:26.620457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.662 [2024-07-14 05:48:26.620486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.662 qpair failed and we were unable to recover it. 00:34:19.662 [2024-07-14 05:48:26.630291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.662 [2024-07-14 05:48:26.630450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.662 [2024-07-14 05:48:26.630476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.662 [2024-07-14 05:48:26.630492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.662 [2024-07-14 05:48:26.630505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.662 [2024-07-14 05:48:26.630534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.662 qpair failed and we were unable to recover it. 00:34:19.662 [2024-07-14 05:48:26.640349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.662 [2024-07-14 05:48:26.640544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.662 [2024-07-14 05:48:26.640586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.662 [2024-07-14 05:48:26.640601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.662 [2024-07-14 05:48:26.640614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.662 [2024-07-14 05:48:26.640642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.662 qpair failed and we were unable to recover it. 
00:34:19.662 [2024-07-14 05:48:26.650371] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.662 [2024-07-14 05:48:26.650558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.662 [2024-07-14 05:48:26.650584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.662 [2024-07-14 05:48:26.650600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.662 [2024-07-14 05:48:26.650613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.662 [2024-07-14 05:48:26.650642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.662 qpair failed and we were unable to recover it. 00:34:19.662 [2024-07-14 05:48:26.660372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.662 [2024-07-14 05:48:26.660524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.662 [2024-07-14 05:48:26.660551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.662 [2024-07-14 05:48:26.660573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.662 [2024-07-14 05:48:26.660589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.662 [2024-07-14 05:48:26.660619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.662 qpair failed and we were unable to recover it. 00:34:19.662 [2024-07-14 05:48:26.670452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.662 [2024-07-14 05:48:26.670636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.662 [2024-07-14 05:48:26.670663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.662 [2024-07-14 05:48:26.670678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.662 [2024-07-14 05:48:26.670693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.662 [2024-07-14 05:48:26.670721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.662 qpair failed and we were unable to recover it. 
00:34:19.662 [2024-07-14 05:48:26.680449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.662 [2024-07-14 05:48:26.680605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.662 [2024-07-14 05:48:26.680632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.662 [2024-07-14 05:48:26.680648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.662 [2024-07-14 05:48:26.680662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.662 [2024-07-14 05:48:26.680691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.662 qpair failed and we were unable to recover it. 00:34:19.662 [2024-07-14 05:48:26.690501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.662 [2024-07-14 05:48:26.690657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.662 [2024-07-14 05:48:26.690684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.662 [2024-07-14 05:48:26.690699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.662 [2024-07-14 05:48:26.690713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.662 [2024-07-14 05:48:26.690756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.662 qpair failed and we were unable to recover it. 00:34:19.662 [2024-07-14 05:48:26.700530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.662 [2024-07-14 05:48:26.700692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.662 [2024-07-14 05:48:26.700719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.662 [2024-07-14 05:48:26.700735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.662 [2024-07-14 05:48:26.700748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.662 [2024-07-14 05:48:26.700794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.662 qpair failed and we were unable to recover it. 
00:34:19.662 [2024-07-14 05:48:26.710553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.662 [2024-07-14 05:48:26.710711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.662 [2024-07-14 05:48:26.710739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.662 [2024-07-14 05:48:26.710755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.662 [2024-07-14 05:48:26.710769] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.662 [2024-07-14 05:48:26.710797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.662 qpair failed and we were unable to recover it. 00:34:19.662 [2024-07-14 05:48:26.720563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.662 [2024-07-14 05:48:26.720721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.662 [2024-07-14 05:48:26.720748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.662 [2024-07-14 05:48:26.720764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.662 [2024-07-14 05:48:26.720777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.662 [2024-07-14 05:48:26.720821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.662 qpair failed and we were unable to recover it. 00:34:19.662 [2024-07-14 05:48:26.730579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.662 [2024-07-14 05:48:26.730738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.662 [2024-07-14 05:48:26.730765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.662 [2024-07-14 05:48:26.730781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.662 [2024-07-14 05:48:26.730795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.662 [2024-07-14 05:48:26.730826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.662 qpair failed and we were unable to recover it. 
00:34:19.662 [2024-07-14 05:48:26.740587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.662 [2024-07-14 05:48:26.740739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.662 [2024-07-14 05:48:26.740766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.662 [2024-07-14 05:48:26.740782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.662 [2024-07-14 05:48:26.740796] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.662 [2024-07-14 05:48:26.740825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.662 qpair failed and we were unable to recover it. 00:34:19.662 [2024-07-14 05:48:26.750635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.662 [2024-07-14 05:48:26.750788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.662 [2024-07-14 05:48:26.750815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.662 [2024-07-14 05:48:26.750835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.662 [2024-07-14 05:48:26.750870] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.662 [2024-07-14 05:48:26.750902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.662 qpair failed and we were unable to recover it. 00:34:19.662 [2024-07-14 05:48:26.760656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.663 [2024-07-14 05:48:26.760856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.663 [2024-07-14 05:48:26.760889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.663 [2024-07-14 05:48:26.760905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.663 [2024-07-14 05:48:26.760918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.663 [2024-07-14 05:48:26.760947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.663 qpair failed and we were unable to recover it. 
00:34:19.921 [2024-07-14 05:48:26.770689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.921 [2024-07-14 05:48:26.770858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.921 [2024-07-14 05:48:26.770896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.921 [2024-07-14 05:48:26.770913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.921 [2024-07-14 05:48:26.770927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.921 [2024-07-14 05:48:26.770958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.921 qpair failed and we were unable to recover it. 00:34:19.921 [2024-07-14 05:48:26.780761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.921 [2024-07-14 05:48:26.780931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.921 [2024-07-14 05:48:26.780960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.921 [2024-07-14 05:48:26.780977] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.921 [2024-07-14 05:48:26.780990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.921 [2024-07-14 05:48:26.781019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.921 qpair failed and we were unable to recover it. 00:34:19.921 [2024-07-14 05:48:26.790778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.921 [2024-07-14 05:48:26.790960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.922 [2024-07-14 05:48:26.790988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.922 [2024-07-14 05:48:26.791004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.922 [2024-07-14 05:48:26.791018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.922 [2024-07-14 05:48:26.791047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.922 qpair failed and we were unable to recover it. 
00:34:19.922 [2024-07-14 05:48:26.800787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.922 [2024-07-14 05:48:26.800990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.922 [2024-07-14 05:48:26.801016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.922 [2024-07-14 05:48:26.801033] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.922 [2024-07-14 05:48:26.801047] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.922 [2024-07-14 05:48:26.801076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.922 qpair failed and we were unable to recover it. 00:34:19.922 [2024-07-14 05:48:26.810810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.922 [2024-07-14 05:48:26.810973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.922 [2024-07-14 05:48:26.810999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.922 [2024-07-14 05:48:26.811015] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.922 [2024-07-14 05:48:26.811029] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.922 [2024-07-14 05:48:26.811058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.922 qpair failed and we were unable to recover it. 00:34:19.922 [2024-07-14 05:48:26.820850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.922 [2024-07-14 05:48:26.821052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.922 [2024-07-14 05:48:26.821080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.922 [2024-07-14 05:48:26.821095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.922 [2024-07-14 05:48:26.821109] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.922 [2024-07-14 05:48:26.821137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.922 qpair failed and we were unable to recover it. 
00:34:19.922 [2024-07-14 05:48:26.830915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.922 [2024-07-14 05:48:26.831077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.922 [2024-07-14 05:48:26.831104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.922 [2024-07-14 05:48:26.831120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.922 [2024-07-14 05:48:26.831134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.922 [2024-07-14 05:48:26.831164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.922 qpair failed and we were unable to recover it. 00:34:19.922 [2024-07-14 05:48:26.840919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.922 [2024-07-14 05:48:26.841118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.922 [2024-07-14 05:48:26.841151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.922 [2024-07-14 05:48:26.841167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.922 [2024-07-14 05:48:26.841181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.922 [2024-07-14 05:48:26.841210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.922 qpair failed and we were unable to recover it. 00:34:19.922 [2024-07-14 05:48:26.850923] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.922 [2024-07-14 05:48:26.851075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.922 [2024-07-14 05:48:26.851102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.922 [2024-07-14 05:48:26.851118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.922 [2024-07-14 05:48:26.851132] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.922 [2024-07-14 05:48:26.851162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.922 qpair failed and we were unable to recover it. 
00:34:19.922 [2024-07-14 05:48:26.861047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.922 [2024-07-14 05:48:26.861215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.922 [2024-07-14 05:48:26.861242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.922 [2024-07-14 05:48:26.861258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.922 [2024-07-14 05:48:26.861272] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.922 [2024-07-14 05:48:26.861300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.922 qpair failed and we were unable to recover it. 00:34:19.922 [2024-07-14 05:48:26.871002] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.922 [2024-07-14 05:48:26.871172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.922 [2024-07-14 05:48:26.871198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.922 [2024-07-14 05:48:26.871228] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.922 [2024-07-14 05:48:26.871242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.922 [2024-07-14 05:48:26.871270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.922 qpair failed and we were unable to recover it. 00:34:19.922 [2024-07-14 05:48:26.881023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.922 [2024-07-14 05:48:26.881209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.922 [2024-07-14 05:48:26.881253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.922 [2024-07-14 05:48:26.881271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.922 [2024-07-14 05:48:26.881286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.922 [2024-07-14 05:48:26.881335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.922 qpair failed and we were unable to recover it. 
00:34:19.922 [2024-07-14 05:48:26.891092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.922 [2024-07-14 05:48:26.891257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.922 [2024-07-14 05:48:26.891283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.922 [2024-07-14 05:48:26.891299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.922 [2024-07-14 05:48:26.891312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.922 [2024-07-14 05:48:26.891342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.922 qpair failed and we were unable to recover it. 00:34:19.922 [2024-07-14 05:48:26.901085] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.922 [2024-07-14 05:48:26.901250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.922 [2024-07-14 05:48:26.901276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.922 [2024-07-14 05:48:26.901292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.922 [2024-07-14 05:48:26.901305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.922 [2024-07-14 05:48:26.901335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.922 qpair failed and we were unable to recover it. 00:34:19.922 [2024-07-14 05:48:26.911114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.922 [2024-07-14 05:48:26.911271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.922 [2024-07-14 05:48:26.911297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.922 [2024-07-14 05:48:26.911312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.922 [2024-07-14 05:48:26.911326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.922 [2024-07-14 05:48:26.911356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.922 qpair failed and we were unable to recover it. 
00:34:19.922 [2024-07-14 05:48:26.921161] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.922 [2024-07-14 05:48:26.921364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.922 [2024-07-14 05:48:26.921391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.922 [2024-07-14 05:48:26.921406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.922 [2024-07-14 05:48:26.921419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.922 [2024-07-14 05:48:26.921448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.922 qpair failed and we were unable to recover it. 00:34:19.922 [2024-07-14 05:48:26.931168] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.922 [2024-07-14 05:48:26.931320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.922 [2024-07-14 05:48:26.931352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.922 [2024-07-14 05:48:26.931368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.922 [2024-07-14 05:48:26.931383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.923 [2024-07-14 05:48:26.931427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-14 05:48:26.941245] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.923 [2024-07-14 05:48:26.941425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.923 [2024-07-14 05:48:26.941451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.923 [2024-07-14 05:48:26.941467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.923 [2024-07-14 05:48:26.941481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.923 [2024-07-14 05:48:26.941525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.923 qpair failed and we were unable to recover it. 
00:34:19.923 [2024-07-14 05:48:26.951251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.923 [2024-07-14 05:48:26.951409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.923 [2024-07-14 05:48:26.951434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.923 [2024-07-14 05:48:26.951450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.923 [2024-07-14 05:48:26.951463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.923 [2024-07-14 05:48:26.951506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-14 05:48:26.961311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.923 [2024-07-14 05:48:26.961505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.923 [2024-07-14 05:48:26.961547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.923 [2024-07-14 05:48:26.961566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.923 [2024-07-14 05:48:26.961580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.923 [2024-07-14 05:48:26.961625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-14 05:48:26.971290] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.923 [2024-07-14 05:48:26.971445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.923 [2024-07-14 05:48:26.971471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.923 [2024-07-14 05:48:26.971486] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.923 [2024-07-14 05:48:26.971500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.923 [2024-07-14 05:48:26.971535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.923 qpair failed and we were unable to recover it. 
00:34:19.923 [2024-07-14 05:48:26.981306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.923 [2024-07-14 05:48:26.981469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.923 [2024-07-14 05:48:26.981496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.923 [2024-07-14 05:48:26.981512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.923 [2024-07-14 05:48:26.981526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.923 [2024-07-14 05:48:26.981555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-14 05:48:26.991366] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.923 [2024-07-14 05:48:26.991525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.923 [2024-07-14 05:48:26.991550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.923 [2024-07-14 05:48:26.991564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.923 [2024-07-14 05:48:26.991592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.923 [2024-07-14 05:48:26.991621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-14 05:48:27.001354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.923 [2024-07-14 05:48:27.001515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.923 [2024-07-14 05:48:27.001542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.923 [2024-07-14 05:48:27.001557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.923 [2024-07-14 05:48:27.001571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.923 [2024-07-14 05:48:27.001602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.923 qpair failed and we were unable to recover it. 
00:34:19.923 [2024-07-14 05:48:27.011460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.923 [2024-07-14 05:48:27.011654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.923 [2024-07-14 05:48:27.011695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.923 [2024-07-14 05:48:27.011710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.923 [2024-07-14 05:48:27.011722] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.923 [2024-07-14 05:48:27.011766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-14 05:48:27.021443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.923 [2024-07-14 05:48:27.021602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.923 [2024-07-14 05:48:27.021634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.923 [2024-07-14 05:48:27.021650] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.923 [2024-07-14 05:48:27.021680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:19.923 [2024-07-14 05:48:27.021709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.923 qpair failed and we were unable to recover it. 00:34:20.181 [2024-07-14 05:48:27.031446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.182 [2024-07-14 05:48:27.031605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.182 [2024-07-14 05:48:27.031633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.182 [2024-07-14 05:48:27.031649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.182 [2024-07-14 05:48:27.031665] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.182 [2024-07-14 05:48:27.031695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.182 qpair failed and we were unable to recover it. 
00:34:20.182 [2024-07-14 05:48:27.041530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.182 [2024-07-14 05:48:27.041691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.182 [2024-07-14 05:48:27.041719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.182 [2024-07-14 05:48:27.041735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.182 [2024-07-14 05:48:27.041764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.182 [2024-07-14 05:48:27.041793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.182 qpair failed and we were unable to recover it. 00:34:20.182 [2024-07-14 05:48:27.051511] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.182 [2024-07-14 05:48:27.051661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.182 [2024-07-14 05:48:27.051687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.182 [2024-07-14 05:48:27.051703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.182 [2024-07-14 05:48:27.051718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.182 [2024-07-14 05:48:27.051761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.182 qpair failed and we were unable to recover it. 00:34:20.182 [2024-07-14 05:48:27.061601] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.182 [2024-07-14 05:48:27.061769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.182 [2024-07-14 05:48:27.061798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.182 [2024-07-14 05:48:27.061815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.182 [2024-07-14 05:48:27.061829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.182 [2024-07-14 05:48:27.061887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.182 qpair failed and we were unable to recover it. 
00:34:20.182 [2024-07-14 05:48:27.071584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.182 [2024-07-14 05:48:27.071745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.182 [2024-07-14 05:48:27.071771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.182 [2024-07-14 05:48:27.071787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.182 [2024-07-14 05:48:27.071800] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.182 [2024-07-14 05:48:27.071831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.182 qpair failed and we were unable to recover it. 00:34:20.182 [2024-07-14 05:48:27.081673] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.182 [2024-07-14 05:48:27.081826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.182 [2024-07-14 05:48:27.081853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.182 [2024-07-14 05:48:27.081875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.182 [2024-07-14 05:48:27.081892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.182 [2024-07-14 05:48:27.081922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.182 qpair failed and we were unable to recover it. 00:34:20.182 [2024-07-14 05:48:27.091603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.182 [2024-07-14 05:48:27.091769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.182 [2024-07-14 05:48:27.091795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.182 [2024-07-14 05:48:27.091811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.182 [2024-07-14 05:48:27.091826] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.182 [2024-07-14 05:48:27.091855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.182 qpair failed and we were unable to recover it. 
00:34:20.182 [2024-07-14 05:48:27.101637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.182 [2024-07-14 05:48:27.101836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.182 [2024-07-14 05:48:27.101862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.182 [2024-07-14 05:48:27.101886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.182 [2024-07-14 05:48:27.101900] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.182 [2024-07-14 05:48:27.101929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.182 qpair failed and we were unable to recover it. 00:34:20.182 [2024-07-14 05:48:27.111674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.182 [2024-07-14 05:48:27.111892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.182 [2024-07-14 05:48:27.111923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.182 [2024-07-14 05:48:27.111940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.182 [2024-07-14 05:48:27.111954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.182 [2024-07-14 05:48:27.111983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.182 qpair failed and we were unable to recover it. 00:34:20.182 [2024-07-14 05:48:27.121697] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.182 [2024-07-14 05:48:27.121851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.182 [2024-07-14 05:48:27.121883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.182 [2024-07-14 05:48:27.121900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.182 [2024-07-14 05:48:27.121914] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.182 [2024-07-14 05:48:27.121943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.182 qpair failed and we were unable to recover it. 
00:34:20.182 [2024-07-14 05:48:27.131793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.182 [2024-07-14 05:48:27.131987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.182 [2024-07-14 05:48:27.132014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.182 [2024-07-14 05:48:27.132029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.182 [2024-07-14 05:48:27.132042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.182 [2024-07-14 05:48:27.132072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.182 qpair failed and we were unable to recover it. 00:34:20.182 [2024-07-14 05:48:27.141798] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.182 [2024-07-14 05:48:27.141987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.182 [2024-07-14 05:48:27.142013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.182 [2024-07-14 05:48:27.142029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.182 [2024-07-14 05:48:27.142042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.182 [2024-07-14 05:48:27.142072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.182 qpair failed and we were unable to recover it. 00:34:20.182 [2024-07-14 05:48:27.151916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.182 [2024-07-14 05:48:27.152078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.182 [2024-07-14 05:48:27.152105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.182 [2024-07-14 05:48:27.152121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.182 [2024-07-14 05:48:27.152141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.182 [2024-07-14 05:48:27.152171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.182 qpair failed and we were unable to recover it. 
00:34:20.182 [2024-07-14 05:48:27.161811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.182 [2024-07-14 05:48:27.161980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.182 [2024-07-14 05:48:27.162006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.182 [2024-07-14 05:48:27.162022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.182 [2024-07-14 05:48:27.162035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.182 [2024-07-14 05:48:27.162065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.182 qpair failed and we were unable to recover it. 00:34:20.182 [2024-07-14 05:48:27.171873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.182 [2024-07-14 05:48:27.172033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.182 [2024-07-14 05:48:27.172062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.182 [2024-07-14 05:48:27.172080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.183 [2024-07-14 05:48:27.172094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.183 [2024-07-14 05:48:27.172125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-07-14 05:48:27.181849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.183 [2024-07-14 05:48:27.182007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.183 [2024-07-14 05:48:27.182034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.183 [2024-07-14 05:48:27.182049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.183 [2024-07-14 05:48:27.182062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.183 [2024-07-14 05:48:27.182093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.183 qpair failed and we were unable to recover it. 
00:34:20.183 [2024-07-14 05:48:27.191933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.183 [2024-07-14 05:48:27.192141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.183 [2024-07-14 05:48:27.192167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.183 [2024-07-14 05:48:27.192182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.183 [2024-07-14 05:48:27.192197] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.183 [2024-07-14 05:48:27.192226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-07-14 05:48:27.202006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.183 [2024-07-14 05:48:27.202163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.183 [2024-07-14 05:48:27.202189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.183 [2024-07-14 05:48:27.202205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.183 [2024-07-14 05:48:27.202218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.183 [2024-07-14 05:48:27.202248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-07-14 05:48:27.211973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.183 [2024-07-14 05:48:27.212137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.183 [2024-07-14 05:48:27.212162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.183 [2024-07-14 05:48:27.212178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.183 [2024-07-14 05:48:27.212193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.183 [2024-07-14 05:48:27.212221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.183 qpair failed and we were unable to recover it. 
00:34:20.183 [2024-07-14 05:48:27.222006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.183 [2024-07-14 05:48:27.222163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.183 [2024-07-14 05:48:27.222189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.183 [2024-07-14 05:48:27.222204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.183 [2024-07-14 05:48:27.222217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.183 [2024-07-14 05:48:27.222263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-07-14 05:48:27.232008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.183 [2024-07-14 05:48:27.232164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.183 [2024-07-14 05:48:27.232190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.183 [2024-07-14 05:48:27.232205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.183 [2024-07-14 05:48:27.232220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.183 [2024-07-14 05:48:27.232249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-07-14 05:48:27.242040] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.183 [2024-07-14 05:48:27.242204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.183 [2024-07-14 05:48:27.242232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.183 [2024-07-14 05:48:27.242251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.183 [2024-07-14 05:48:27.242286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.183 [2024-07-14 05:48:27.242316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.183 qpair failed and we were unable to recover it. 
00:34:20.183 [2024-07-14 05:48:27.252071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.183 [2024-07-14 05:48:27.252227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.183 [2024-07-14 05:48:27.252254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.183 [2024-07-14 05:48:27.252269] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.183 [2024-07-14 05:48:27.252285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.183 [2024-07-14 05:48:27.252314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-07-14 05:48:27.262217] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.183 [2024-07-14 05:48:27.262389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.183 [2024-07-14 05:48:27.262415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.183 [2024-07-14 05:48:27.262431] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.183 [2024-07-14 05:48:27.262445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.183 [2024-07-14 05:48:27.262474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-07-14 05:48:27.272146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.183 [2024-07-14 05:48:27.272311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.183 [2024-07-14 05:48:27.272337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.183 [2024-07-14 05:48:27.272352] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.183 [2024-07-14 05:48:27.272366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.183 [2024-07-14 05:48:27.272394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.183 qpair failed and we were unable to recover it. 
00:34:20.183 [2024-07-14 05:48:27.282235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.183 [2024-07-14 05:48:27.282394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.183 [2024-07-14 05:48:27.282420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.183 [2024-07-14 05:48:27.282437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.183 [2024-07-14 05:48:27.282450] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.183 [2024-07-14 05:48:27.282479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.442 [2024-07-14 05:48:27.292160] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.442 [2024-07-14 05:48:27.292328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.442 [2024-07-14 05:48:27.292357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.442 [2024-07-14 05:48:27.292373] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.442 [2024-07-14 05:48:27.292388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.442 [2024-07-14 05:48:27.292417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.442 qpair failed and we were unable to recover it. 00:34:20.442 [2024-07-14 05:48:27.302222] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.442 [2024-07-14 05:48:27.302390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.442 [2024-07-14 05:48:27.302417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.442 [2024-07-14 05:48:27.302433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.442 [2024-07-14 05:48:27.302447] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.442 [2024-07-14 05:48:27.302477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.442 qpair failed and we were unable to recover it. 
00:34:20.442 [2024-07-14 05:48:27.312230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.442 [2024-07-14 05:48:27.312392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.442 [2024-07-14 05:48:27.312419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.442 [2024-07-14 05:48:27.312434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.442 [2024-07-14 05:48:27.312448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.442 [2024-07-14 05:48:27.312477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.442 qpair failed and we were unable to recover it. 00:34:20.442 [2024-07-14 05:48:27.322286] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.442 [2024-07-14 05:48:27.322463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.442 [2024-07-14 05:48:27.322489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.442 [2024-07-14 05:48:27.322505] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.442 [2024-07-14 05:48:27.322520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.442 [2024-07-14 05:48:27.322549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.442 qpair failed and we were unable to recover it. 00:34:20.442 [2024-07-14 05:48:27.332305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.442 [2024-07-14 05:48:27.332505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.442 [2024-07-14 05:48:27.332546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.442 [2024-07-14 05:48:27.332567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.442 [2024-07-14 05:48:27.332582] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.442 [2024-07-14 05:48:27.332625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.442 qpair failed and we were unable to recover it. 
00:34:20.442 [2024-07-14 05:48:27.342291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.442 [2024-07-14 05:48:27.342451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.442 [2024-07-14 05:48:27.342478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.442 [2024-07-14 05:48:27.342493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.442 [2024-07-14 05:48:27.342507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.442 [2024-07-14 05:48:27.342536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.442 qpair failed and we were unable to recover it. 00:34:20.442 [2024-07-14 05:48:27.352365] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.442 [2024-07-14 05:48:27.352525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.442 [2024-07-14 05:48:27.352551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.442 [2024-07-14 05:48:27.352566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.442 [2024-07-14 05:48:27.352581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.442 [2024-07-14 05:48:27.352610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.442 qpair failed and we were unable to recover it. 00:34:20.442 [2024-07-14 05:48:27.362376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.442 [2024-07-14 05:48:27.362535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.442 [2024-07-14 05:48:27.362561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.442 [2024-07-14 05:48:27.362576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.442 [2024-07-14 05:48:27.362591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.442 [2024-07-14 05:48:27.362619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.442 qpair failed and we were unable to recover it. 
00:34:20.442 [2024-07-14 05:48:27.372385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.442 [2024-07-14 05:48:27.372546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.442 [2024-07-14 05:48:27.372572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.442 [2024-07-14 05:48:27.372587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.442 [2024-07-14 05:48:27.372601] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.442 [2024-07-14 05:48:27.372631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.442 qpair failed and we were unable to recover it. 00:34:20.442 [2024-07-14 05:48:27.382470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.442 [2024-07-14 05:48:27.382630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.442 [2024-07-14 05:48:27.382657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.442 [2024-07-14 05:48:27.382673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.442 [2024-07-14 05:48:27.382687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.442 [2024-07-14 05:48:27.382716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.442 qpair failed and we were unable to recover it. 00:34:20.442 [2024-07-14 05:48:27.392499] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.442 [2024-07-14 05:48:27.392707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.442 [2024-07-14 05:48:27.392733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.442 [2024-07-14 05:48:27.392748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.442 [2024-07-14 05:48:27.392762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.442 [2024-07-14 05:48:27.392791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.442 qpair failed and we were unable to recover it. 
00:34:20.442 [2024-07-14 05:48:27.402491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.442 [2024-07-14 05:48:27.402655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.442 [2024-07-14 05:48:27.402681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.443 [2024-07-14 05:48:27.402698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.443 [2024-07-14 05:48:27.402712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.443 [2024-07-14 05:48:27.402741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.443 qpair failed and we were unable to recover it. 00:34:20.443 [2024-07-14 05:48:27.412561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.443 [2024-07-14 05:48:27.412775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.443 [2024-07-14 05:48:27.412815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.443 [2024-07-14 05:48:27.412830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.443 [2024-07-14 05:48:27.412861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.443 [2024-07-14 05:48:27.412899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.443 qpair failed and we were unable to recover it. 00:34:20.443 [2024-07-14 05:48:27.422591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.443 [2024-07-14 05:48:27.422774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.443 [2024-07-14 05:48:27.422800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.443 [2024-07-14 05:48:27.422823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.443 [2024-07-14 05:48:27.422852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.443 [2024-07-14 05:48:27.422888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.443 qpair failed and we were unable to recover it. 
00:34:20.443 [2024-07-14 05:48:27.432663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.443 [2024-07-14 05:48:27.432840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.443 [2024-07-14 05:48:27.432874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.443 [2024-07-14 05:48:27.432894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.443 [2024-07-14 05:48:27.432909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.443 [2024-07-14 05:48:27.432940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.443 qpair failed and we were unable to recover it. 00:34:20.443 [2024-07-14 05:48:27.442619] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.443 [2024-07-14 05:48:27.442819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.443 [2024-07-14 05:48:27.442846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.443 [2024-07-14 05:48:27.442862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.443 [2024-07-14 05:48:27.442883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.443 [2024-07-14 05:48:27.442913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.443 qpair failed and we were unable to recover it. 00:34:20.443 [2024-07-14 05:48:27.452644] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.443 [2024-07-14 05:48:27.452807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.443 [2024-07-14 05:48:27.452833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.443 [2024-07-14 05:48:27.452849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.443 [2024-07-14 05:48:27.452863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.443 [2024-07-14 05:48:27.452900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.443 qpair failed and we were unable to recover it. 
00:34:20.443 [2024-07-14 05:48:27.462672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.443 [2024-07-14 05:48:27.462835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.443 [2024-07-14 05:48:27.462862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.443 [2024-07-14 05:48:27.462887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.443 [2024-07-14 05:48:27.462903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.443 [2024-07-14 05:48:27.462932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.443 qpair failed and we were unable to recover it. 00:34:20.443 [2024-07-14 05:48:27.472710] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.443 [2024-07-14 05:48:27.472917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.443 [2024-07-14 05:48:27.472942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.443 [2024-07-14 05:48:27.472957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.443 [2024-07-14 05:48:27.472972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.443 [2024-07-14 05:48:27.473001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.443 qpair failed and we were unable to recover it. 00:34:20.443 [2024-07-14 05:48:27.482757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.443 [2024-07-14 05:48:27.482921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.443 [2024-07-14 05:48:27.482947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.443 [2024-07-14 05:48:27.482963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.443 [2024-07-14 05:48:27.482977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.443 [2024-07-14 05:48:27.483006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.443 qpair failed and we were unable to recover it. 
00:34:20.443 [2024-07-14 05:48:27.492846] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.443 [2024-07-14 05:48:27.493009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.443 [2024-07-14 05:48:27.493035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.443 [2024-07-14 05:48:27.493050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.443 [2024-07-14 05:48:27.493065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.443 [2024-07-14 05:48:27.493094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.443 qpair failed and we were unable to recover it. 00:34:20.443 [2024-07-14 05:48:27.502793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.443 [2024-07-14 05:48:27.502958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.443 [2024-07-14 05:48:27.502985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.443 [2024-07-14 05:48:27.503000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.443 [2024-07-14 05:48:27.503014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.443 [2024-07-14 05:48:27.503043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.443 qpair failed and we were unable to recover it. 00:34:20.443 [2024-07-14 05:48:27.512819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.443 [2024-07-14 05:48:27.512993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.443 [2024-07-14 05:48:27.513019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.443 [2024-07-14 05:48:27.513041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.443 [2024-07-14 05:48:27.513056] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.443 [2024-07-14 05:48:27.513085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.443 qpair failed and we were unable to recover it. 
00:34:20.443 [2024-07-14 05:48:27.522902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.443 [2024-07-14 05:48:27.523063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.443 [2024-07-14 05:48:27.523090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.443 [2024-07-14 05:48:27.523105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.443 [2024-07-14 05:48:27.523119] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.443 [2024-07-14 05:48:27.523149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.443 qpair failed and we were unable to recover it. 00:34:20.443 [2024-07-14 05:48:27.532910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.443 [2024-07-14 05:48:27.533069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.443 [2024-07-14 05:48:27.533094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.443 [2024-07-14 05:48:27.533109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.443 [2024-07-14 05:48:27.533124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.443 [2024-07-14 05:48:27.533153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.443 qpair failed and we were unable to recover it. 00:34:20.443 [2024-07-14 05:48:27.542962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.443 [2024-07-14 05:48:27.543166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.443 [2024-07-14 05:48:27.543194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.443 [2024-07-14 05:48:27.543210] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.443 [2024-07-14 05:48:27.543224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.443 [2024-07-14 05:48:27.543254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.443 qpair failed and we were unable to recover it. 
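The errors above are all from adding an I/O queue pair: the target no longer recognizes controller ID 0x1, the fabric CONNECT for the new qpair is rejected, and spdk_nvme_qpair_process_completions() then reports the transport error -6 (-ENXIO) before the harness gives up on the qpair. A sketch of the generic detect-and-recreate pattern on the host side follows; it is an assumption about how an application could react, not what this test does, and poll_or_recreate is a made-up helper name while the spdk_nvme_* calls are the public API.

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Illustrative only: poll an I/O qpair and rebuild it if the transport
 * reports a fatal error (e.g. -ENXIO, as seen in the log above). */
static void
poll_or_recreate(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair **qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(*qpair, 0 /* no limit */);

	if (rc < 0) {
		/* The qpair is dead; drop it and ask the controller for a new one.
		 * Depending on the SPDK release, spdk_nvme_ctrlr_reconnect_io_qpair()
		 * may be preferable since it keeps the qpair object alive. */
		spdk_nvme_ctrlr_free_io_qpair(*qpair);
		*qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	}
}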
00:34:20.702 [2024-07-14 05:48:27.552994] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.702 [2024-07-14 05:48:27.553159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.702 [2024-07-14 05:48:27.553188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.702 [2024-07-14 05:48:27.553204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.702 [2024-07-14 05:48:27.553218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.702 [2024-07-14 05:48:27.553247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-14 05:48:27.563012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.702 [2024-07-14 05:48:27.563179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.702 [2024-07-14 05:48:27.563206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.702 [2024-07-14 05:48:27.563222] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.702 [2024-07-14 05:48:27.563251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.702 [2024-07-14 05:48:27.563280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-14 05:48:27.573041] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.702 [2024-07-14 05:48:27.573218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.702 [2024-07-14 05:48:27.573247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.702 [2024-07-14 05:48:27.573266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.702 [2024-07-14 05:48:27.573295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.702 [2024-07-14 05:48:27.573325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.702 qpair failed and we were unable to recover it. 
00:34:20.702 [2024-07-14 05:48:27.583035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.702 [2024-07-14 05:48:27.583187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.702 [2024-07-14 05:48:27.583214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.702 [2024-07-14 05:48:27.583238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.702 [2024-07-14 05:48:27.583251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.702 [2024-07-14 05:48:27.583282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-14 05:48:27.593074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.702 [2024-07-14 05:48:27.593239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.702 [2024-07-14 05:48:27.593265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.702 [2024-07-14 05:48:27.593280] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.702 [2024-07-14 05:48:27.593294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.702 [2024-07-14 05:48:27.593323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-14 05:48:27.603179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.702 [2024-07-14 05:48:27.603341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.702 [2024-07-14 05:48:27.603372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.702 [2024-07-14 05:48:27.603388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.702 [2024-07-14 05:48:27.603403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.702 [2024-07-14 05:48:27.603432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.702 qpair failed and we were unable to recover it. 
00:34:20.702 [2024-07-14 05:48:27.613107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.702 [2024-07-14 05:48:27.613261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.702 [2024-07-14 05:48:27.613287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.702 [2024-07-14 05:48:27.613302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.702 [2024-07-14 05:48:27.613316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.702 [2024-07-14 05:48:27.613345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-14 05:48:27.623140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.702 [2024-07-14 05:48:27.623292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.702 [2024-07-14 05:48:27.623318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.702 [2024-07-14 05:48:27.623333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.702 [2024-07-14 05:48:27.623348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.702 [2024-07-14 05:48:27.623376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.702 qpair failed and we were unable to recover it. 00:34:20.702 [2024-07-14 05:48:27.633174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.702 [2024-07-14 05:48:27.633387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.702 [2024-07-14 05:48:27.633412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.702 [2024-07-14 05:48:27.633427] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.702 [2024-07-14 05:48:27.633441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.702 [2024-07-14 05:48:27.633470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.702 qpair failed and we were unable to recover it. 
00:34:20.702 [2024-07-14 05:48:27.643202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.702 [2024-07-14 05:48:27.643353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.703 [2024-07-14 05:48:27.643379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.703 [2024-07-14 05:48:27.643394] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.703 [2024-07-14 05:48:27.643408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.703 [2024-07-14 05:48:27.643443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-14 05:48:27.653218] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.703 [2024-07-14 05:48:27.653377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.703 [2024-07-14 05:48:27.653404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.703 [2024-07-14 05:48:27.653419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.703 [2024-07-14 05:48:27.653434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.703 [2024-07-14 05:48:27.653462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-14 05:48:27.663252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.703 [2024-07-14 05:48:27.663401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.703 [2024-07-14 05:48:27.663427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.703 [2024-07-14 05:48:27.663443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.703 [2024-07-14 05:48:27.663457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.703 [2024-07-14 05:48:27.663486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.703 qpair failed and we were unable to recover it. 
00:34:20.703 [2024-07-14 05:48:27.673334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.703 [2024-07-14 05:48:27.673544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.703 [2024-07-14 05:48:27.673570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.703 [2024-07-14 05:48:27.673584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.703 [2024-07-14 05:48:27.673599] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.703 [2024-07-14 05:48:27.673628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-14 05:48:27.683359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.703 [2024-07-14 05:48:27.683533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.703 [2024-07-14 05:48:27.683563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.703 [2024-07-14 05:48:27.683583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.703 [2024-07-14 05:48:27.683613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.703 [2024-07-14 05:48:27.683643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-14 05:48:27.693381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.703 [2024-07-14 05:48:27.693542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.703 [2024-07-14 05:48:27.693574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.703 [2024-07-14 05:48:27.693591] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.703 [2024-07-14 05:48:27.693605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.703 [2024-07-14 05:48:27.693636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.703 qpair failed and we were unable to recover it. 
00:34:20.703 [2024-07-14 05:48:27.703414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.703 [2024-07-14 05:48:27.703613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.703 [2024-07-14 05:48:27.703639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.703 [2024-07-14 05:48:27.703654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.703 [2024-07-14 05:48:27.703684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.703 [2024-07-14 05:48:27.703713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-14 05:48:27.713413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.703 [2024-07-14 05:48:27.713576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.703 [2024-07-14 05:48:27.713602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.703 [2024-07-14 05:48:27.713618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.703 [2024-07-14 05:48:27.713632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.703 [2024-07-14 05:48:27.713661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-14 05:48:27.723433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.703 [2024-07-14 05:48:27.723608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.703 [2024-07-14 05:48:27.723635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.703 [2024-07-14 05:48:27.723650] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.703 [2024-07-14 05:48:27.723667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.703 [2024-07-14 05:48:27.723697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.703 qpair failed and we were unable to recover it. 
00:34:20.703 [2024-07-14 05:48:27.733467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.703 [2024-07-14 05:48:27.733641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.703 [2024-07-14 05:48:27.733667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.703 [2024-07-14 05:48:27.733683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.703 [2024-07-14 05:48:27.733697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.703 [2024-07-14 05:48:27.733732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-14 05:48:27.743504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.703 [2024-07-14 05:48:27.743668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.703 [2024-07-14 05:48:27.743694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.703 [2024-07-14 05:48:27.743710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.703 [2024-07-14 05:48:27.743724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.703 [2024-07-14 05:48:27.743753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-14 05:48:27.753615] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.703 [2024-07-14 05:48:27.753776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.703 [2024-07-14 05:48:27.753802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.703 [2024-07-14 05:48:27.753817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.703 [2024-07-14 05:48:27.753831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.703 [2024-07-14 05:48:27.753874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.703 qpair failed and we were unable to recover it. 
00:34:20.703 [2024-07-14 05:48:27.763549] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.703 [2024-07-14 05:48:27.763708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.703 [2024-07-14 05:48:27.763733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.703 [2024-07-14 05:48:27.763748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.703 [2024-07-14 05:48:27.763763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.703 [2024-07-14 05:48:27.763791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-14 05:48:27.773585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.703 [2024-07-14 05:48:27.773750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.703 [2024-07-14 05:48:27.773776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.703 [2024-07-14 05:48:27.773792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.703 [2024-07-14 05:48:27.773806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.703 [2024-07-14 05:48:27.773834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.703 qpair failed and we were unable to recover it. 00:34:20.703 [2024-07-14 05:48:27.783581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.703 [2024-07-14 05:48:27.783738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.703 [2024-07-14 05:48:27.783769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.703 [2024-07-14 05:48:27.783787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.703 [2024-07-14 05:48:27.783802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.703 [2024-07-14 05:48:27.783831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.704 qpair failed and we were unable to recover it. 
00:34:20.704 [2024-07-14 05:48:27.793664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.704 [2024-07-14 05:48:27.793873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.704 [2024-07-14 05:48:27.793900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.704 [2024-07-14 05:48:27.793916] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.704 [2024-07-14 05:48:27.793930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.704 [2024-07-14 05:48:27.793959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.704 [2024-07-14 05:48:27.803688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.704 [2024-07-14 05:48:27.803862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.704 [2024-07-14 05:48:27.803897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.704 [2024-07-14 05:48:27.803913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.704 [2024-07-14 05:48:27.803927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.704 [2024-07-14 05:48:27.803958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.704 qpair failed and we were unable to recover it. 00:34:20.961 [2024-07-14 05:48:27.813785] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.961 [2024-07-14 05:48:27.813995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.961 [2024-07-14 05:48:27.814024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.961 [2024-07-14 05:48:27.814040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.961 [2024-07-14 05:48:27.814054] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.961 [2024-07-14 05:48:27.814084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.961 qpair failed and we were unable to recover it. 
00:34:20.962 [2024-07-14 05:48:27.823778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.962 [2024-07-14 05:48:27.823943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.962 [2024-07-14 05:48:27.823971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.962 [2024-07-14 05:48:27.823986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.962 [2024-07-14 05:48:27.824000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.962 [2024-07-14 05:48:27.824036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.962 qpair failed and we were unable to recover it. 00:34:20.962 [2024-07-14 05:48:27.833752] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.962 [2024-07-14 05:48:27.833929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.962 [2024-07-14 05:48:27.833955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.962 [2024-07-14 05:48:27.833970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.962 [2024-07-14 05:48:27.833984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.962 [2024-07-14 05:48:27.834014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.962 qpair failed and we were unable to recover it. 00:34:20.962 [2024-07-14 05:48:27.843762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.962 [2024-07-14 05:48:27.843933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.962 [2024-07-14 05:48:27.843959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.962 [2024-07-14 05:48:27.843974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.962 [2024-07-14 05:48:27.843989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.962 [2024-07-14 05:48:27.844017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.962 qpair failed and we were unable to recover it. 
00:34:20.962 [2024-07-14 05:48:27.853834] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.962 [2024-07-14 05:48:27.854003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.962 [2024-07-14 05:48:27.854030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.962 [2024-07-14 05:48:27.854045] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.962 [2024-07-14 05:48:27.854060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.962 [2024-07-14 05:48:27.854089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.962 qpair failed and we were unable to recover it. 00:34:20.962 [2024-07-14 05:48:27.863830] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.962 [2024-07-14 05:48:27.864000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.962 [2024-07-14 05:48:27.864026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.962 [2024-07-14 05:48:27.864041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.962 [2024-07-14 05:48:27.864055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.962 [2024-07-14 05:48:27.864085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.962 qpair failed and we were unable to recover it. 00:34:20.962 [2024-07-14 05:48:27.873841] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.962 [2024-07-14 05:48:27.874015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.962 [2024-07-14 05:48:27.874047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.962 [2024-07-14 05:48:27.874063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.962 [2024-07-14 05:48:27.874077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.962 [2024-07-14 05:48:27.874106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.962 qpair failed and we were unable to recover it. 
00:34:20.962 [2024-07-14 05:48:27.883984] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.962 [2024-07-14 05:48:27.884168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.962 [2024-07-14 05:48:27.884208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.962 [2024-07-14 05:48:27.884224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.962 [2024-07-14 05:48:27.884238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.962 [2024-07-14 05:48:27.884281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.962 qpair failed and we were unable to recover it. 00:34:20.962 [2024-07-14 05:48:27.893943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.962 [2024-07-14 05:48:27.894129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.962 [2024-07-14 05:48:27.894155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.962 [2024-07-14 05:48:27.894170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.962 [2024-07-14 05:48:27.894199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.962 [2024-07-14 05:48:27.894229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.962 qpair failed and we were unable to recover it. 00:34:20.962 [2024-07-14 05:48:27.904009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.962 [2024-07-14 05:48:27.904184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.962 [2024-07-14 05:48:27.904210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.962 [2024-07-14 05:48:27.904225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.962 [2024-07-14 05:48:27.904240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.962 [2024-07-14 05:48:27.904268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.962 qpair failed and we were unable to recover it. 
00:34:20.962 [2024-07-14 05:48:27.913977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.962 [2024-07-14 05:48:27.914138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.962 [2024-07-14 05:48:27.914165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.962 [2024-07-14 05:48:27.914180] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.962 [2024-07-14 05:48:27.914214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.962 [2024-07-14 05:48:27.914243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.962 qpair failed and we were unable to recover it. 00:34:20.962 [2024-07-14 05:48:27.923977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.962 [2024-07-14 05:48:27.924146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.962 [2024-07-14 05:48:27.924172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.962 [2024-07-14 05:48:27.924187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.962 [2024-07-14 05:48:27.924201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.962 [2024-07-14 05:48:27.924230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.962 qpair failed and we were unable to recover it. 00:34:20.962 [2024-07-14 05:48:27.934024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.962 [2024-07-14 05:48:27.934191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.962 [2024-07-14 05:48:27.934218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.962 [2024-07-14 05:48:27.934233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.962 [2024-07-14 05:48:27.934262] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.962 [2024-07-14 05:48:27.934291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.962 qpair failed and we were unable to recover it. 
00:34:20.962 [2024-07-14 05:48:27.944051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.962 [2024-07-14 05:48:27.944212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.962 [2024-07-14 05:48:27.944238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.962 [2024-07-14 05:48:27.944253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.962 [2024-07-14 05:48:27.944268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.962 [2024-07-14 05:48:27.944313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.962 qpair failed and we were unable to recover it. 00:34:20.962 [2024-07-14 05:48:27.954108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.962 [2024-07-14 05:48:27.954275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.962 [2024-07-14 05:48:27.954301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.962 [2024-07-14 05:48:27.954316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.962 [2024-07-14 05:48:27.954329] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.962 [2024-07-14 05:48:27.954358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.962 qpair failed and we were unable to recover it. 00:34:20.962 [2024-07-14 05:48:27.964251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.962 [2024-07-14 05:48:27.964440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.962 [2024-07-14 05:48:27.964465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.963 [2024-07-14 05:48:27.964480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.963 [2024-07-14 05:48:27.964493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.963 [2024-07-14 05:48:27.964536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.963 qpair failed and we were unable to recover it. 
00:34:20.963 [2024-07-14 05:48:27.974155] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.963 [2024-07-14 05:48:27.974341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.963 [2024-07-14 05:48:27.974367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.963 [2024-07-14 05:48:27.974382] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.963 [2024-07-14 05:48:27.974410] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.963 [2024-07-14 05:48:27.974439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-14 05:48:27.984136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.963 [2024-07-14 05:48:27.984295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.963 [2024-07-14 05:48:27.984321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.963 [2024-07-14 05:48:27.984336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.963 [2024-07-14 05:48:27.984351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.963 [2024-07-14 05:48:27.984379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-14 05:48:27.994231] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.963 [2024-07-14 05:48:27.994397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.963 [2024-07-14 05:48:27.994422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.963 [2024-07-14 05:48:27.994437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.963 [2024-07-14 05:48:27.994464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.963 [2024-07-14 05:48:27.994492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.963 qpair failed and we were unable to recover it. 
00:34:20.963 [2024-07-14 05:48:28.004224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.963 [2024-07-14 05:48:28.004390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.963 [2024-07-14 05:48:28.004416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.963 [2024-07-14 05:48:28.004431] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.963 [2024-07-14 05:48:28.004465] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.963 [2024-07-14 05:48:28.004494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-14 05:48:28.014319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.963 [2024-07-14 05:48:28.014466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.963 [2024-07-14 05:48:28.014492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.963 [2024-07-14 05:48:28.014508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.963 [2024-07-14 05:48:28.014522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.963 [2024-07-14 05:48:28.014566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-14 05:48:28.024259] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.963 [2024-07-14 05:48:28.024419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.963 [2024-07-14 05:48:28.024446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.963 [2024-07-14 05:48:28.024462] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.963 [2024-07-14 05:48:28.024475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.963 [2024-07-14 05:48:28.024504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.963 qpair failed and we were unable to recover it. 
00:34:20.963 [2024-07-14 05:48:28.034389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.963 [2024-07-14 05:48:28.034544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.963 [2024-07-14 05:48:28.034570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.963 [2024-07-14 05:48:28.034586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.963 [2024-07-14 05:48:28.034600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.963 [2024-07-14 05:48:28.034628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-14 05:48:28.044421] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.963 [2024-07-14 05:48:28.044589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.963 [2024-07-14 05:48:28.044615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.963 [2024-07-14 05:48:28.044630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.963 [2024-07-14 05:48:28.044642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.963 [2024-07-14 05:48:28.044685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-14 05:48:28.054364] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.963 [2024-07-14 05:48:28.054523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.963 [2024-07-14 05:48:28.054549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.963 [2024-07-14 05:48:28.054564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.963 [2024-07-14 05:48:28.054578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.963 [2024-07-14 05:48:28.054606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.963 qpair failed and we were unable to recover it. 
00:34:20.963 [2024-07-14 05:48:28.064366] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.963 [2024-07-14 05:48:28.064516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.963 [2024-07-14 05:48:28.064545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.963 [2024-07-14 05:48:28.064561] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.963 [2024-07-14 05:48:28.064575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:20.963 [2024-07-14 05:48:28.064604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.963 qpair failed and we were unable to recover it. 00:34:21.221 [2024-07-14 05:48:28.074442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.221 [2024-07-14 05:48:28.074623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.221 [2024-07-14 05:48:28.074653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.221 [2024-07-14 05:48:28.074669] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.221 [2024-07-14 05:48:28.074697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.222 [2024-07-14 05:48:28.074727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.222 qpair failed and we were unable to recover it. 00:34:21.222 [2024-07-14 05:48:28.084464] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.222 [2024-07-14 05:48:28.084664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.222 [2024-07-14 05:48:28.084708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.222 [2024-07-14 05:48:28.084724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.222 [2024-07-14 05:48:28.084737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.222 [2024-07-14 05:48:28.084781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.222 qpair failed and we were unable to recover it. 
00:34:21.222 [2024-07-14 05:48:28.094461] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.222 [2024-07-14 05:48:28.094623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.222 [2024-07-14 05:48:28.094650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.222 [2024-07-14 05:48:28.094671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.222 [2024-07-14 05:48:28.094687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.222 [2024-07-14 05:48:28.094716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.222 qpair failed and we were unable to recover it. 00:34:21.222 [2024-07-14 05:48:28.104581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.222 [2024-07-14 05:48:28.104780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.222 [2024-07-14 05:48:28.104808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.222 [2024-07-14 05:48:28.104841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.222 [2024-07-14 05:48:28.104856] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.222 [2024-07-14 05:48:28.104909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.222 qpair failed and we were unable to recover it. 00:34:21.222 [2024-07-14 05:48:28.114549] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.222 [2024-07-14 05:48:28.114747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.222 [2024-07-14 05:48:28.114775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.222 [2024-07-14 05:48:28.114791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.222 [2024-07-14 05:48:28.114805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.222 [2024-07-14 05:48:28.114836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.222 qpair failed and we were unable to recover it. 
00:34:21.222 [2024-07-14 05:48:28.124547] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.222 [2024-07-14 05:48:28.124699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.222 [2024-07-14 05:48:28.124726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.222 [2024-07-14 05:48:28.124742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.222 [2024-07-14 05:48:28.124756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.222 [2024-07-14 05:48:28.124785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.222 qpair failed and we were unable to recover it. 00:34:21.222 [2024-07-14 05:48:28.134591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.222 [2024-07-14 05:48:28.134738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.222 [2024-07-14 05:48:28.134765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.222 [2024-07-14 05:48:28.134781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.222 [2024-07-14 05:48:28.134795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.222 [2024-07-14 05:48:28.134825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.222 qpair failed and we were unable to recover it. 00:34:21.222 [2024-07-14 05:48:28.144670] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.222 [2024-07-14 05:48:28.144821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.222 [2024-07-14 05:48:28.144847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.222 [2024-07-14 05:48:28.144863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.222 [2024-07-14 05:48:28.144887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.222 [2024-07-14 05:48:28.144918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.222 qpair failed and we were unable to recover it. 
00:34:21.222 [2024-07-14 05:48:28.154640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.222 [2024-07-14 05:48:28.154798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.222 [2024-07-14 05:48:28.154825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.222 [2024-07-14 05:48:28.154841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.222 [2024-07-14 05:48:28.154855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.222 [2024-07-14 05:48:28.154907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.222 qpair failed and we were unable to recover it. 00:34:21.222 [2024-07-14 05:48:28.164668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.222 [2024-07-14 05:48:28.164827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.222 [2024-07-14 05:48:28.164853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.222 [2024-07-14 05:48:28.164876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.222 [2024-07-14 05:48:28.164892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.222 [2024-07-14 05:48:28.164921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.222 qpair failed and we were unable to recover it. 00:34:21.222 [2024-07-14 05:48:28.174678] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.222 [2024-07-14 05:48:28.174829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.222 [2024-07-14 05:48:28.174856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.222 [2024-07-14 05:48:28.174876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.222 [2024-07-14 05:48:28.174892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.222 [2024-07-14 05:48:28.174921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.222 qpair failed and we were unable to recover it. 
00:34:21.222 [2024-07-14 05:48:28.184731] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.222 [2024-07-14 05:48:28.184895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.222 [2024-07-14 05:48:28.184922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.222 [2024-07-14 05:48:28.184943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.222 [2024-07-14 05:48:28.184958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.222 [2024-07-14 05:48:28.184987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.222 qpair failed and we were unable to recover it. 00:34:21.222 [2024-07-14 05:48:28.194772] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.222 [2024-07-14 05:48:28.194971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.222 [2024-07-14 05:48:28.194998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.222 [2024-07-14 05:48:28.195018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.222 [2024-07-14 05:48:28.195033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.222 [2024-07-14 05:48:28.195063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.222 qpair failed and we were unable to recover it. 00:34:21.222 [2024-07-14 05:48:28.204816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.222 [2024-07-14 05:48:28.205020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.222 [2024-07-14 05:48:28.205047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.222 [2024-07-14 05:48:28.205063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.222 [2024-07-14 05:48:28.205077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.222 [2024-07-14 05:48:28.205105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.222 qpair failed and we were unable to recover it. 
00:34:21.222 [2024-07-14 05:48:28.214802] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.222 [2024-07-14 05:48:28.214956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.222 [2024-07-14 05:48:28.214982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.222 [2024-07-14 05:48:28.214997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.222 [2024-07-14 05:48:28.215011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.222 [2024-07-14 05:48:28.215040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.222 qpair failed and we were unable to recover it. 00:34:21.222 [2024-07-14 05:48:28.224837] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.223 [2024-07-14 05:48:28.224999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.223 [2024-07-14 05:48:28.225026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.223 [2024-07-14 05:48:28.225041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.223 [2024-07-14 05:48:28.225055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.223 [2024-07-14 05:48:28.225085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.223 qpair failed and we were unable to recover it. 00:34:21.223 [2024-07-14 05:48:28.234899] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.223 [2024-07-14 05:48:28.235066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.223 [2024-07-14 05:48:28.235093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.223 [2024-07-14 05:48:28.235109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.223 [2024-07-14 05:48:28.235123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.223 [2024-07-14 05:48:28.235153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.223 qpair failed and we were unable to recover it. 
00:34:21.223 [2024-07-14 05:48:28.244884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.223 [2024-07-14 05:48:28.245055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.223 [2024-07-14 05:48:28.245083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.223 [2024-07-14 05:48:28.245098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.223 [2024-07-14 05:48:28.245112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.223 [2024-07-14 05:48:28.245142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.223 qpair failed and we were unable to recover it. 00:34:21.223 [2024-07-14 05:48:28.254969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.223 [2024-07-14 05:48:28.255128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.223 [2024-07-14 05:48:28.255155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.223 [2024-07-14 05:48:28.255171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.223 [2024-07-14 05:48:28.255184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.223 [2024-07-14 05:48:28.255214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.223 qpair failed and we were unable to recover it. 00:34:21.223 [2024-07-14 05:48:28.264941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.223 [2024-07-14 05:48:28.265096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.223 [2024-07-14 05:48:28.265123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.223 [2024-07-14 05:48:28.265139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.223 [2024-07-14 05:48:28.265153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.223 [2024-07-14 05:48:28.265181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.223 qpair failed and we were unable to recover it. 
00:34:21.223 [2024-07-14 05:48:28.274980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.223 [2024-07-14 05:48:28.275139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.223 [2024-07-14 05:48:28.275165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.223 [2024-07-14 05:48:28.275186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.223 [2024-07-14 05:48:28.275201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.223 [2024-07-14 05:48:28.275230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.223 qpair failed and we were unable to recover it. 00:34:21.223 [2024-07-14 05:48:28.285010] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.223 [2024-07-14 05:48:28.285166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.223 [2024-07-14 05:48:28.285194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.223 [2024-07-14 05:48:28.285209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.223 [2024-07-14 05:48:28.285223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.223 [2024-07-14 05:48:28.285267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.223 qpair failed and we were unable to recover it. 00:34:21.223 [2024-07-14 05:48:28.295137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.223 [2024-07-14 05:48:28.295302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.223 [2024-07-14 05:48:28.295329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.223 [2024-07-14 05:48:28.295344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.223 [2024-07-14 05:48:28.295373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.223 [2024-07-14 05:48:28.295402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.223 qpair failed and we were unable to recover it. 
00:34:21.223 [2024-07-14 05:48:28.305076] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.223 [2024-07-14 05:48:28.305229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.223 [2024-07-14 05:48:28.305256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.223 [2024-07-14 05:48:28.305272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.223 [2024-07-14 05:48:28.305300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.223 [2024-07-14 05:48:28.305330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.223 qpair failed and we were unable to recover it. 00:34:21.223 [2024-07-14 05:48:28.315102] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.223 [2024-07-14 05:48:28.315264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.223 [2024-07-14 05:48:28.315291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.223 [2024-07-14 05:48:28.315306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.223 [2024-07-14 05:48:28.315320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.223 [2024-07-14 05:48:28.315365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.223 qpair failed and we were unable to recover it. 00:34:21.223 [2024-07-14 05:48:28.325235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.223 [2024-07-14 05:48:28.325394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.223 [2024-07-14 05:48:28.325423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.223 [2024-07-14 05:48:28.325440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.223 [2024-07-14 05:48:28.325454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.223 [2024-07-14 05:48:28.325483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.223 qpair failed and we were unable to recover it. 
00:34:21.484 [2024-07-14 05:48:28.335153] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.484 [2024-07-14 05:48:28.335305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.484 [2024-07-14 05:48:28.335334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.484 [2024-07-14 05:48:28.335350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.484 [2024-07-14 05:48:28.335363] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.484 [2024-07-14 05:48:28.335393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.484 qpair failed and we were unable to recover it. 00:34:21.484 [2024-07-14 05:48:28.345161] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.484 [2024-07-14 05:48:28.345312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.484 [2024-07-14 05:48:28.345339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.484 [2024-07-14 05:48:28.345355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.484 [2024-07-14 05:48:28.345369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.484 [2024-07-14 05:48:28.345398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.484 qpair failed and we were unable to recover it. 00:34:21.484 [2024-07-14 05:48:28.355218] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.484 [2024-07-14 05:48:28.355377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.484 [2024-07-14 05:48:28.355404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.484 [2024-07-14 05:48:28.355420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.484 [2024-07-14 05:48:28.355434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.484 [2024-07-14 05:48:28.355463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.484 qpair failed and we were unable to recover it. 
00:34:21.484 [2024-07-14 05:48:28.365232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.484 [2024-07-14 05:48:28.365391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.484 [2024-07-14 05:48:28.365424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.484 [2024-07-14 05:48:28.365440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.484 [2024-07-14 05:48:28.365454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.484 [2024-07-14 05:48:28.365484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.484 qpair failed and we were unable to recover it. 00:34:21.484 [2024-07-14 05:48:28.375276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.484 [2024-07-14 05:48:28.375430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.484 [2024-07-14 05:48:28.375458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.484 [2024-07-14 05:48:28.375473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.484 [2024-07-14 05:48:28.375487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.484 [2024-07-14 05:48:28.375516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.484 qpair failed and we were unable to recover it. 00:34:21.484 [2024-07-14 05:48:28.385298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.484 [2024-07-14 05:48:28.385449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.484 [2024-07-14 05:48:28.385476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.484 [2024-07-14 05:48:28.385492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.484 [2024-07-14 05:48:28.385506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.484 [2024-07-14 05:48:28.385536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.484 qpair failed and we were unable to recover it. 
00:34:21.484 [2024-07-14 05:48:28.395343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.484 [2024-07-14 05:48:28.395505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.484 [2024-07-14 05:48:28.395532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.484 [2024-07-14 05:48:28.395547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.484 [2024-07-14 05:48:28.395561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.484 [2024-07-14 05:48:28.395591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.484 qpair failed and we were unable to recover it. 00:34:21.484 [2024-07-14 05:48:28.405344] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.484 [2024-07-14 05:48:28.405491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.484 [2024-07-14 05:48:28.405517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.484 [2024-07-14 05:48:28.405533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.484 [2024-07-14 05:48:28.405547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.484 [2024-07-14 05:48:28.405577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.484 qpair failed and we were unable to recover it. 00:34:21.484 [2024-07-14 05:48:28.415371] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.485 [2024-07-14 05:48:28.415523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.485 [2024-07-14 05:48:28.415549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.485 [2024-07-14 05:48:28.415565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.485 [2024-07-14 05:48:28.415580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.485 [2024-07-14 05:48:28.415608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.485 qpair failed and we were unable to recover it. 
00:34:21.485 [2024-07-14 05:48:28.425392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.485 [2024-07-14 05:48:28.425551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.485 [2024-07-14 05:48:28.425577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.485 [2024-07-14 05:48:28.425593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.485 [2024-07-14 05:48:28.425606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.485 [2024-07-14 05:48:28.425636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.485 qpair failed and we were unable to recover it. 00:34:21.485 [2024-07-14 05:48:28.435473] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.485 [2024-07-14 05:48:28.435634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.485 [2024-07-14 05:48:28.435661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.485 [2024-07-14 05:48:28.435676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.485 [2024-07-14 05:48:28.435690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.485 [2024-07-14 05:48:28.435720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.485 qpair failed and we were unable to recover it. 00:34:21.485 [2024-07-14 05:48:28.445466] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.485 [2024-07-14 05:48:28.445642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.485 [2024-07-14 05:48:28.445668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.485 [2024-07-14 05:48:28.445684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.485 [2024-07-14 05:48:28.445698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.485 [2024-07-14 05:48:28.445728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.485 qpair failed and we were unable to recover it. 
00:34:21.485 [2024-07-14 05:48:28.455500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.485 [2024-07-14 05:48:28.455703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.485 [2024-07-14 05:48:28.455735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.485 [2024-07-14 05:48:28.455752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.485 [2024-07-14 05:48:28.455766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.485 [2024-07-14 05:48:28.455795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.485 qpair failed and we were unable to recover it. 00:34:21.485 [2024-07-14 05:48:28.465653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.485 [2024-07-14 05:48:28.465834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.485 [2024-07-14 05:48:28.465862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.485 [2024-07-14 05:48:28.465885] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.485 [2024-07-14 05:48:28.465909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.485 [2024-07-14 05:48:28.465938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.485 qpair failed and we were unable to recover it. 00:34:21.485 [2024-07-14 05:48:28.475587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.485 [2024-07-14 05:48:28.475791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.485 [2024-07-14 05:48:28.475818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.485 [2024-07-14 05:48:28.475834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.485 [2024-07-14 05:48:28.475849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.485 [2024-07-14 05:48:28.475887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.485 qpair failed and we were unable to recover it. 
00:34:21.485 [2024-07-14 05:48:28.485615] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.485 [2024-07-14 05:48:28.485787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.485 [2024-07-14 05:48:28.485813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.485 [2024-07-14 05:48:28.485829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.485 [2024-07-14 05:48:28.485843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.485 [2024-07-14 05:48:28.485877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.485 qpair failed and we were unable to recover it. 00:34:21.485 [2024-07-14 05:48:28.495663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.485 [2024-07-14 05:48:28.495824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.485 [2024-07-14 05:48:28.495850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.485 [2024-07-14 05:48:28.495872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.485 [2024-07-14 05:48:28.495890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.485 [2024-07-14 05:48:28.495924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.485 qpair failed and we were unable to recover it. 00:34:21.485 [2024-07-14 05:48:28.505647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.485 [2024-07-14 05:48:28.505821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.485 [2024-07-14 05:48:28.505847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.485 [2024-07-14 05:48:28.505863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.485 [2024-07-14 05:48:28.505887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.485 [2024-07-14 05:48:28.505917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.485 qpair failed and we were unable to recover it. 
00:34:21.485 [2024-07-14 05:48:28.515686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.485 [2024-07-14 05:48:28.515871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.485 [2024-07-14 05:48:28.515909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.485 [2024-07-14 05:48:28.515924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.485 [2024-07-14 05:48:28.515937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.485 [2024-07-14 05:48:28.515967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.485 qpair failed and we were unable to recover it. 00:34:21.485 [2024-07-14 05:48:28.525694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.485 [2024-07-14 05:48:28.525861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.485 [2024-07-14 05:48:28.525893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.485 [2024-07-14 05:48:28.525910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.485 [2024-07-14 05:48:28.525924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.485 [2024-07-14 05:48:28.525952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.485 qpair failed and we were unable to recover it. 00:34:21.485 [2024-07-14 05:48:28.535755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.485 [2024-07-14 05:48:28.535922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.485 [2024-07-14 05:48:28.535948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.485 [2024-07-14 05:48:28.535963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.485 [2024-07-14 05:48:28.535978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.485 [2024-07-14 05:48:28.536007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.485 qpair failed and we were unable to recover it. 
00:34:21.485 [2024-07-14 05:48:28.545773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.485 [2024-07-14 05:48:28.545935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.485 [2024-07-14 05:48:28.545966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.485 [2024-07-14 05:48:28.545982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.485 [2024-07-14 05:48:28.545997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.485 [2024-07-14 05:48:28.546026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.485 qpair failed and we were unable to recover it. 00:34:21.485 [2024-07-14 05:48:28.555815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.485 [2024-07-14 05:48:28.555987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.485 [2024-07-14 05:48:28.556013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.486 [2024-07-14 05:48:28.556029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.486 [2024-07-14 05:48:28.556042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.486 [2024-07-14 05:48:28.556070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.486 qpair failed and we were unable to recover it. 00:34:21.486 [2024-07-14 05:48:28.565822] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.486 [2024-07-14 05:48:28.566028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.486 [2024-07-14 05:48:28.566055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.486 [2024-07-14 05:48:28.566069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.486 [2024-07-14 05:48:28.566083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.486 [2024-07-14 05:48:28.566111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.486 qpair failed and we were unable to recover it. 
00:34:21.486 [2024-07-14 05:48:28.575859] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.486 [2024-07-14 05:48:28.576018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.486 [2024-07-14 05:48:28.576045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.486 [2024-07-14 05:48:28.576060] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.486 [2024-07-14 05:48:28.576074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.486 [2024-07-14 05:48:28.576104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.486 qpair failed and we were unable to recover it. 00:34:21.486 [2024-07-14 05:48:28.585901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.486 [2024-07-14 05:48:28.586089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.486 [2024-07-14 05:48:28.586135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.486 [2024-07-14 05:48:28.586166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.486 [2024-07-14 05:48:28.586195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.486 [2024-07-14 05:48:28.586253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.486 qpair failed and we were unable to recover it. 00:34:21.744 [2024-07-14 05:48:28.595930] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.744 [2024-07-14 05:48:28.596103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.744 [2024-07-14 05:48:28.596132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.744 [2024-07-14 05:48:28.596151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.745 [2024-07-14 05:48:28.596181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.745 [2024-07-14 05:48:28.596212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.745 qpair failed and we were unable to recover it. 
00:34:21.745 [2024-07-14 05:48:28.605939] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.745 [2024-07-14 05:48:28.606099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.745 [2024-07-14 05:48:28.606127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.745 [2024-07-14 05:48:28.606142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.745 [2024-07-14 05:48:28.606156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.745 [2024-07-14 05:48:28.606187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.745 qpair failed and we were unable to recover it. 00:34:21.745 [2024-07-14 05:48:28.615953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.745 [2024-07-14 05:48:28.616109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.745 [2024-07-14 05:48:28.616135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.745 [2024-07-14 05:48:28.616151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.745 [2024-07-14 05:48:28.616165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.745 [2024-07-14 05:48:28.616194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.745 qpair failed and we were unable to recover it. 00:34:21.745 [2024-07-14 05:48:28.626001] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.745 [2024-07-14 05:48:28.626154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.745 [2024-07-14 05:48:28.626181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.745 [2024-07-14 05:48:28.626197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.745 [2024-07-14 05:48:28.626212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.745 [2024-07-14 05:48:28.626240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.745 qpair failed and we were unable to recover it. 
00:34:21.745 [2024-07-14 05:48:28.636061] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.745 [2024-07-14 05:48:28.636227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.745 [2024-07-14 05:48:28.636260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.745 [2024-07-14 05:48:28.636277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.745 [2024-07-14 05:48:28.636291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.745 [2024-07-14 05:48:28.636321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.745 qpair failed and we were unable to recover it. 00:34:21.745 [2024-07-14 05:48:28.646110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.745 [2024-07-14 05:48:28.646268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.745 [2024-07-14 05:48:28.646295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.745 [2024-07-14 05:48:28.646310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.745 [2024-07-14 05:48:28.646325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.745 [2024-07-14 05:48:28.646354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.745 qpair failed and we were unable to recover it. 00:34:21.745 [2024-07-14 05:48:28.656101] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.745 [2024-07-14 05:48:28.656289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.745 [2024-07-14 05:48:28.656331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.745 [2024-07-14 05:48:28.656350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.745 [2024-07-14 05:48:28.656364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.745 [2024-07-14 05:48:28.656409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.745 qpair failed and we were unable to recover it. 
00:34:21.745 [2024-07-14 05:48:28.666122] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.745 [2024-07-14 05:48:28.666279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.745 [2024-07-14 05:48:28.666307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.745 [2024-07-14 05:48:28.666323] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.745 [2024-07-14 05:48:28.666337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.745 [2024-07-14 05:48:28.666367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.745 qpair failed and we were unable to recover it. 00:34:21.745 [2024-07-14 05:48:28.676141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.745 [2024-07-14 05:48:28.676301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.745 [2024-07-14 05:48:28.676328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.745 [2024-07-14 05:48:28.676344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.745 [2024-07-14 05:48:28.676363] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.745 [2024-07-14 05:48:28.676393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.745 qpair failed and we were unable to recover it. 00:34:21.745 [2024-07-14 05:48:28.686173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.745 [2024-07-14 05:48:28.686325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.745 [2024-07-14 05:48:28.686352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.745 [2024-07-14 05:48:28.686368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.745 [2024-07-14 05:48:28.686382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.745 [2024-07-14 05:48:28.686411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.745 qpair failed and we were unable to recover it. 
00:34:21.745 [2024-07-14 05:48:28.696302] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.745 [2024-07-14 05:48:28.696454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.745 [2024-07-14 05:48:28.696481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.745 [2024-07-14 05:48:28.696496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.745 [2024-07-14 05:48:28.696510] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.745 [2024-07-14 05:48:28.696539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.745 qpair failed and we were unable to recover it. 00:34:21.745 [2024-07-14 05:48:28.706226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.745 [2024-07-14 05:48:28.706373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.745 [2024-07-14 05:48:28.706400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.745 [2024-07-14 05:48:28.706415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.745 [2024-07-14 05:48:28.706429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.745 [2024-07-14 05:48:28.706473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.745 qpair failed and we were unable to recover it. 00:34:21.745 [2024-07-14 05:48:28.716247] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.745 [2024-07-14 05:48:28.716406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.745 [2024-07-14 05:48:28.716433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.745 [2024-07-14 05:48:28.716449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.745 [2024-07-14 05:48:28.716462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.745 [2024-07-14 05:48:28.716491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.745 qpair failed and we were unable to recover it. 
00:34:21.745 [2024-07-14 05:48:28.726283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.745 [2024-07-14 05:48:28.726455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.745 [2024-07-14 05:48:28.726482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.745 [2024-07-14 05:48:28.726497] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.745 [2024-07-14 05:48:28.726512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.745 [2024-07-14 05:48:28.726542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.745 qpair failed and we were unable to recover it. 00:34:21.745 [2024-07-14 05:48:28.736305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.745 [2024-07-14 05:48:28.736468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.745 [2024-07-14 05:48:28.736495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.745 [2024-07-14 05:48:28.736511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.745 [2024-07-14 05:48:28.736525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.745 [2024-07-14 05:48:28.736554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.745 qpair failed and we were unable to recover it. 00:34:21.745 [2024-07-14 05:48:28.746345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.746 [2024-07-14 05:48:28.746498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.746 [2024-07-14 05:48:28.746525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.746 [2024-07-14 05:48:28.746540] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.746 [2024-07-14 05:48:28.746555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.746 [2024-07-14 05:48:28.746583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.746 qpair failed and we were unable to recover it. 
00:34:21.746 [2024-07-14 05:48:28.756369] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.746 [2024-07-14 05:48:28.756570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.746 [2024-07-14 05:48:28.756597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.746 [2024-07-14 05:48:28.756612] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.746 [2024-07-14 05:48:28.756626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.746 [2024-07-14 05:48:28.756654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.746 qpair failed and we were unable to recover it. 00:34:21.746 [2024-07-14 05:48:28.766420] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.746 [2024-07-14 05:48:28.766625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.746 [2024-07-14 05:48:28.766667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.746 [2024-07-14 05:48:28.766681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.746 [2024-07-14 05:48:28.766699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.746 [2024-07-14 05:48:28.766742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.746 qpair failed and we were unable to recover it. 00:34:21.746 [2024-07-14 05:48:28.776452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.746 [2024-07-14 05:48:28.776623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.746 [2024-07-14 05:48:28.776651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.746 [2024-07-14 05:48:28.776666] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.746 [2024-07-14 05:48:28.776681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.746 [2024-07-14 05:48:28.776710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.746 qpair failed and we were unable to recover it. 
00:34:21.746 [2024-07-14 05:48:28.786437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.746 [2024-07-14 05:48:28.786634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.746 [2024-07-14 05:48:28.786660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.746 [2024-07-14 05:48:28.786676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.746 [2024-07-14 05:48:28.786689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.746 [2024-07-14 05:48:28.786717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.746 qpair failed and we were unable to recover it. 00:34:21.746 [2024-07-14 05:48:28.796455] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.746 [2024-07-14 05:48:28.796608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.746 [2024-07-14 05:48:28.796634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.746 [2024-07-14 05:48:28.796649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.746 [2024-07-14 05:48:28.796663] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.746 [2024-07-14 05:48:28.796694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.746 qpair failed and we were unable to recover it. 00:34:21.746 [2024-07-14 05:48:28.806493] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.746 [2024-07-14 05:48:28.806654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.746 [2024-07-14 05:48:28.806680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.746 [2024-07-14 05:48:28.806696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.746 [2024-07-14 05:48:28.806709] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.746 [2024-07-14 05:48:28.806738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.746 qpair failed and we were unable to recover it. 
00:34:21.746 [2024-07-14 05:48:28.816622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.746 [2024-07-14 05:48:28.816784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.746 [2024-07-14 05:48:28.816811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.746 [2024-07-14 05:48:28.816826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.746 [2024-07-14 05:48:28.816839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.746 [2024-07-14 05:48:28.816874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.746 qpair failed and we were unable to recover it. 00:34:21.746 [2024-07-14 05:48:28.826542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.746 [2024-07-14 05:48:28.826699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.746 [2024-07-14 05:48:28.826725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.746 [2024-07-14 05:48:28.826741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.746 [2024-07-14 05:48:28.826754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.746 [2024-07-14 05:48:28.826783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.746 qpair failed and we were unable to recover it. 00:34:21.746 [2024-07-14 05:48:28.836612] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.746 [2024-07-14 05:48:28.836771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.746 [2024-07-14 05:48:28.836796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.746 [2024-07-14 05:48:28.836811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.746 [2024-07-14 05:48:28.836824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.746 [2024-07-14 05:48:28.836853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.746 qpair failed and we were unable to recover it. 
00:34:21.746 [2024-07-14 05:48:28.846626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.746 [2024-07-14 05:48:28.846810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.746 [2024-07-14 05:48:28.846838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.746 [2024-07-14 05:48:28.846854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.746 [2024-07-14 05:48:28.846879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:21.746 [2024-07-14 05:48:28.846915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.746 qpair failed and we were unable to recover it. 00:34:22.005 [2024-07-14 05:48:28.856660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.005 [2024-07-14 05:48:28.856822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.005 [2024-07-14 05:48:28.856850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.005 [2024-07-14 05:48:28.856874] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.005 [2024-07-14 05:48:28.856897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.005 [2024-07-14 05:48:28.856928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.005 qpair failed and we were unable to recover it. 00:34:22.005 [2024-07-14 05:48:28.866663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.005 [2024-07-14 05:48:28.866825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.005 [2024-07-14 05:48:28.866852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.005 [2024-07-14 05:48:28.866874] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.005 [2024-07-14 05:48:28.866890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.005 [2024-07-14 05:48:28.866920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.005 qpair failed and we were unable to recover it. 
00:34:22.005 [2024-07-14 05:48:28.876702] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.005 [2024-07-14 05:48:28.876864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.005 [2024-07-14 05:48:28.876899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.005 [2024-07-14 05:48:28.876922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.005 [2024-07-14 05:48:28.876936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.005 [2024-07-14 05:48:28.876965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.005 qpair failed and we were unable to recover it. 00:34:22.005 [2024-07-14 05:48:28.886746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.005 [2024-07-14 05:48:28.886914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.005 [2024-07-14 05:48:28.886940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.005 [2024-07-14 05:48:28.886956] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.005 [2024-07-14 05:48:28.886970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.005 [2024-07-14 05:48:28.886999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.005 qpair failed and we were unable to recover it. 00:34:22.005 [2024-07-14 05:48:28.896780] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.005 [2024-07-14 05:48:28.896939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.005 [2024-07-14 05:48:28.896967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.005 [2024-07-14 05:48:28.896982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.005 [2024-07-14 05:48:28.896995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.005 [2024-07-14 05:48:28.897025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.005 qpair failed and we were unable to recover it. 
00:34:22.005 [2024-07-14 05:48:28.906808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.005 [2024-07-14 05:48:28.906973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.005 [2024-07-14 05:48:28.906999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.005 [2024-07-14 05:48:28.907015] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.005 [2024-07-14 05:48:28.907028] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.005 [2024-07-14 05:48:28.907058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.005 qpair failed and we were unable to recover it. 00:34:22.005 [2024-07-14 05:48:28.916840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.005 [2024-07-14 05:48:28.917007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.005 [2024-07-14 05:48:28.917034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.005 [2024-07-14 05:48:28.917049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.005 [2024-07-14 05:48:28.917063] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.005 [2024-07-14 05:48:28.917092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.005 qpair failed and we were unable to recover it. 00:34:22.005 [2024-07-14 05:48:28.926853] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.005 [2024-07-14 05:48:28.927020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.005 [2024-07-14 05:48:28.927047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.005 [2024-07-14 05:48:28.927062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.005 [2024-07-14 05:48:28.927076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.005 [2024-07-14 05:48:28.927105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.005 qpair failed and we were unable to recover it. 
00:34:22.005 [2024-07-14 05:48:28.936880] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.005 [2024-07-14 05:48:28.937040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.005 [2024-07-14 05:48:28.937066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.005 [2024-07-14 05:48:28.937082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.005 [2024-07-14 05:48:28.937095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.005 [2024-07-14 05:48:28.937124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.005 qpair failed and we were unable to recover it. 00:34:22.005 [2024-07-14 05:48:28.946899] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.005 [2024-07-14 05:48:28.947091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.005 [2024-07-14 05:48:28.947117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.005 [2024-07-14 05:48:28.947138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.005 [2024-07-14 05:48:28.947152] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.005 [2024-07-14 05:48:28.947181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.005 qpair failed and we were unable to recover it. 00:34:22.005 [2024-07-14 05:48:28.957137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.005 [2024-07-14 05:48:28.957295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.005 [2024-07-14 05:48:28.957322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.005 [2024-07-14 05:48:28.957337] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.005 [2024-07-14 05:48:28.957351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.005 [2024-07-14 05:48:28.957394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.005 qpair failed and we were unable to recover it. 
00:34:22.005 [2024-07-14 05:48:28.966970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.005 [2024-07-14 05:48:28.967136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.006 [2024-07-14 05:48:28.967163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.006 [2024-07-14 05:48:28.967178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.006 [2024-07-14 05:48:28.967192] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.006 [2024-07-14 05:48:28.967222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-07-14 05:48:28.976987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.006 [2024-07-14 05:48:28.977145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.006 [2024-07-14 05:48:28.977172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.006 [2024-07-14 05:48:28.977188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.006 [2024-07-14 05:48:28.977201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.006 [2024-07-14 05:48:28.977245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-07-14 05:48:28.987037] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.006 [2024-07-14 05:48:28.987193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.006 [2024-07-14 05:48:28.987220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.006 [2024-07-14 05:48:28.987235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.006 [2024-07-14 05:48:28.987249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.006 [2024-07-14 05:48:28.987278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.006 qpair failed and we were unable to recover it. 
00:34:22.006 [2024-07-14 05:48:28.997111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.006 [2024-07-14 05:48:28.997272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.006 [2024-07-14 05:48:28.997298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.006 [2024-07-14 05:48:28.997312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.006 [2024-07-14 05:48:28.997341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.006 [2024-07-14 05:48:28.997369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-07-14 05:48:29.007135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.006 [2024-07-14 05:48:29.007365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.006 [2024-07-14 05:48:29.007405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.006 [2024-07-14 05:48:29.007420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.006 [2024-07-14 05:48:29.007433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.006 [2024-07-14 05:48:29.007474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-07-14 05:48:29.017171] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.006 [2024-07-14 05:48:29.017360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.006 [2024-07-14 05:48:29.017401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.006 [2024-07-14 05:48:29.017417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.006 [2024-07-14 05:48:29.017430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.006 [2024-07-14 05:48:29.017458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.006 qpair failed and we were unable to recover it. 
00:34:22.006 [2024-07-14 05:48:29.027172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.006 [2024-07-14 05:48:29.027328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.006 [2024-07-14 05:48:29.027354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.006 [2024-07-14 05:48:29.027369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.006 [2024-07-14 05:48:29.027383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.006 [2024-07-14 05:48:29.027411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-07-14 05:48:29.037173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.006 [2024-07-14 05:48:29.037327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.006 [2024-07-14 05:48:29.037353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.006 [2024-07-14 05:48:29.037375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.006 [2024-07-14 05:48:29.037390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.006 [2024-07-14 05:48:29.037420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-07-14 05:48:29.047195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.006 [2024-07-14 05:48:29.047347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.006 [2024-07-14 05:48:29.047373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.006 [2024-07-14 05:48:29.047389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.006 [2024-07-14 05:48:29.047402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.006 [2024-07-14 05:48:29.047431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.006 qpair failed and we were unable to recover it. 
00:34:22.006 [2024-07-14 05:48:29.057220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.006 [2024-07-14 05:48:29.057374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.006 [2024-07-14 05:48:29.057399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.006 [2024-07-14 05:48:29.057414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.006 [2024-07-14 05:48:29.057428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.006 [2024-07-14 05:48:29.057457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-07-14 05:48:29.067335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.006 [2024-07-14 05:48:29.067492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.006 [2024-07-14 05:48:29.067518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.006 [2024-07-14 05:48:29.067534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.006 [2024-07-14 05:48:29.067563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.006 [2024-07-14 05:48:29.067591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-07-14 05:48:29.077284] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.006 [2024-07-14 05:48:29.077447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.006 [2024-07-14 05:48:29.077474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.006 [2024-07-14 05:48:29.077489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.006 [2024-07-14 05:48:29.077504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.006 [2024-07-14 05:48:29.077533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.006 qpair failed and we were unable to recover it. 
00:34:22.006 [2024-07-14 05:48:29.087310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.006 [2024-07-14 05:48:29.087487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.006 [2024-07-14 05:48:29.087513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.006 [2024-07-14 05:48:29.087529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.006 [2024-07-14 05:48:29.087542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.006 [2024-07-14 05:48:29.087572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-07-14 05:48:29.097325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.006 [2024-07-14 05:48:29.097483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.006 [2024-07-14 05:48:29.097509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.006 [2024-07-14 05:48:29.097524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.006 [2024-07-14 05:48:29.097538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.006 [2024-07-14 05:48:29.097568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-07-14 05:48:29.107415] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.006 [2024-07-14 05:48:29.107603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.006 [2024-07-14 05:48:29.107642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.006 [2024-07-14 05:48:29.107673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.006 [2024-07-14 05:48:29.107698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.006 [2024-07-14 05:48:29.107738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.006 qpair failed and we were unable to recover it. 
00:34:22.265 [2024-07-14 05:48:29.117448] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.265 [2024-07-14 05:48:29.117610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.265 [2024-07-14 05:48:29.117637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.265 [2024-07-14 05:48:29.117654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.265 [2024-07-14 05:48:29.117683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.265 [2024-07-14 05:48:29.117713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.265 qpair failed and we were unable to recover it. 00:34:22.265 [2024-07-14 05:48:29.127432] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.265 [2024-07-14 05:48:29.127591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.265 [2024-07-14 05:48:29.127618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.265 [2024-07-14 05:48:29.127640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.265 [2024-07-14 05:48:29.127655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.265 [2024-07-14 05:48:29.127685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.265 qpair failed and we were unable to recover it. 00:34:22.265 [2024-07-14 05:48:29.137464] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.265 [2024-07-14 05:48:29.137617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.265 [2024-07-14 05:48:29.137643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.265 [2024-07-14 05:48:29.137658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.265 [2024-07-14 05:48:29.137672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.265 [2024-07-14 05:48:29.137717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.265 qpair failed and we were unable to recover it. 
00:34:22.265 [2024-07-14 05:48:29.147488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.265 [2024-07-14 05:48:29.147661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.265 [2024-07-14 05:48:29.147687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.265 [2024-07-14 05:48:29.147718] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.265 [2024-07-14 05:48:29.147732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.265 [2024-07-14 05:48:29.147761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.265 qpair failed and we were unable to recover it. 00:34:22.265 [2024-07-14 05:48:29.157526] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.265 [2024-07-14 05:48:29.157690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.265 [2024-07-14 05:48:29.157717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.265 [2024-07-14 05:48:29.157732] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.265 [2024-07-14 05:48:29.157761] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.265 [2024-07-14 05:48:29.157790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.265 qpair failed and we were unable to recover it. 00:34:22.265 [2024-07-14 05:48:29.167523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.265 [2024-07-14 05:48:29.167682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.265 [2024-07-14 05:48:29.167709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.265 [2024-07-14 05:48:29.167724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.265 [2024-07-14 05:48:29.167738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.265 [2024-07-14 05:48:29.167768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.265 qpair failed and we were unable to recover it. 
00:34:22.265 [2024-07-14 05:48:29.177556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.265 [2024-07-14 05:48:29.177714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.265 [2024-07-14 05:48:29.177740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.265 [2024-07-14 05:48:29.177755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.265 [2024-07-14 05:48:29.177768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.265 [2024-07-14 05:48:29.177814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.265 qpair failed and we were unable to recover it. 00:34:22.265 [2024-07-14 05:48:29.187587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.265 [2024-07-14 05:48:29.187752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.265 [2024-07-14 05:48:29.187777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.265 [2024-07-14 05:48:29.187793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.265 [2024-07-14 05:48:29.187823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.265 [2024-07-14 05:48:29.187851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.265 qpair failed and we were unable to recover it. 00:34:22.265 [2024-07-14 05:48:29.197627] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.265 [2024-07-14 05:48:29.197802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.265 [2024-07-14 05:48:29.197827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.265 [2024-07-14 05:48:29.197842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.265 [2024-07-14 05:48:29.197856] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.265 [2024-07-14 05:48:29.197892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.265 qpair failed and we were unable to recover it. 
00:34:22.265 [2024-07-14 05:48:29.207675] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.265 [2024-07-14 05:48:29.207851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.265 [2024-07-14 05:48:29.207885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.265 [2024-07-14 05:48:29.207901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.265 [2024-07-14 05:48:29.207915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.265 [2024-07-14 05:48:29.207944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.265 qpair failed and we were unable to recover it. 00:34:22.265 [2024-07-14 05:48:29.217673] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.265 [2024-07-14 05:48:29.217836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.265 [2024-07-14 05:48:29.217872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.265 [2024-07-14 05:48:29.217892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.265 [2024-07-14 05:48:29.217906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.265 [2024-07-14 05:48:29.217937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.265 qpair failed and we were unable to recover it. 00:34:22.265 [2024-07-14 05:48:29.227762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.265 [2024-07-14 05:48:29.227967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.265 [2024-07-14 05:48:29.227994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.265 [2024-07-14 05:48:29.228009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.265 [2024-07-14 05:48:29.228023] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.265 [2024-07-14 05:48:29.228053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.265 qpair failed and we were unable to recover it. 
00:34:22.265 [2024-07-14 05:48:29.237741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.265 [2024-07-14 05:48:29.237900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.265 [2024-07-14 05:48:29.237926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.265 [2024-07-14 05:48:29.237942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.266 [2024-07-14 05:48:29.237955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.266 [2024-07-14 05:48:29.237985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.266 qpair failed and we were unable to recover it. 00:34:22.266 [2024-07-14 05:48:29.247735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.266 [2024-07-14 05:48:29.247895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.266 [2024-07-14 05:48:29.247922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.266 [2024-07-14 05:48:29.247938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.266 [2024-07-14 05:48:29.247951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.266 [2024-07-14 05:48:29.247981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.266 qpair failed and we were unable to recover it. 00:34:22.266 [2024-07-14 05:48:29.257802] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.266 [2024-07-14 05:48:29.257962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.266 [2024-07-14 05:48:29.257988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.266 [2024-07-14 05:48:29.258003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.266 [2024-07-14 05:48:29.258017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.266 [2024-07-14 05:48:29.258052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.266 qpair failed and we were unable to recover it. 
00:34:22.266 [2024-07-14 05:48:29.267824] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.266 [2024-07-14 05:48:29.268043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.266 [2024-07-14 05:48:29.268069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.266 [2024-07-14 05:48:29.268084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.266 [2024-07-14 05:48:29.268097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.266 [2024-07-14 05:48:29.268127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.266 qpair failed and we were unable to recover it. 00:34:22.266 [2024-07-14 05:48:29.277827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.266 [2024-07-14 05:48:29.277998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.266 [2024-07-14 05:48:29.278025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.266 [2024-07-14 05:48:29.278040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.266 [2024-07-14 05:48:29.278055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.266 [2024-07-14 05:48:29.278084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.266 qpair failed and we were unable to recover it. 00:34:22.266 [2024-07-14 05:48:29.287910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.266 [2024-07-14 05:48:29.288096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.266 [2024-07-14 05:48:29.288122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.266 [2024-07-14 05:48:29.288137] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.266 [2024-07-14 05:48:29.288152] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.266 [2024-07-14 05:48:29.288197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.266 qpair failed and we were unable to recover it. 
00:34:22.266 [2024-07-14 05:48:29.297904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.266 [2024-07-14 05:48:29.298063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.266 [2024-07-14 05:48:29.298089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.266 [2024-07-14 05:48:29.298105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.266 [2024-07-14 05:48:29.298118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.266 [2024-07-14 05:48:29.298149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.266 qpair failed and we were unable to recover it. 00:34:22.266 [2024-07-14 05:48:29.307922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.266 [2024-07-14 05:48:29.308119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.266 [2024-07-14 05:48:29.308149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.266 [2024-07-14 05:48:29.308165] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.266 [2024-07-14 05:48:29.308180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.266 [2024-07-14 05:48:29.308209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.266 qpair failed and we were unable to recover it. 00:34:22.266 [2024-07-14 05:48:29.317957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.266 [2024-07-14 05:48:29.318121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.266 [2024-07-14 05:48:29.318147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.266 [2024-07-14 05:48:29.318163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.266 [2024-07-14 05:48:29.318178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.266 [2024-07-14 05:48:29.318222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.266 qpair failed and we were unable to recover it. 
00:34:22.266 [2024-07-14 05:48:29.327998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.266 [2024-07-14 05:48:29.328156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.266 [2024-07-14 05:48:29.328182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.266 [2024-07-14 05:48:29.328198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.266 [2024-07-14 05:48:29.328211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.266 [2024-07-14 05:48:29.328240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.266 qpair failed and we were unable to recover it. 00:34:22.266 [2024-07-14 05:48:29.337999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.266 [2024-07-14 05:48:29.338148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.266 [2024-07-14 05:48:29.338174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.266 [2024-07-14 05:48:29.338190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.266 [2024-07-14 05:48:29.338205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.266 [2024-07-14 05:48:29.338233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.266 qpair failed and we were unable to recover it. 00:34:22.266 [2024-07-14 05:48:29.348040] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.266 [2024-07-14 05:48:29.348188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.266 [2024-07-14 05:48:29.348214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.266 [2024-07-14 05:48:29.348229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.266 [2024-07-14 05:48:29.348242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.266 [2024-07-14 05:48:29.348277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.266 qpair failed and we were unable to recover it. 
00:34:22.266 [2024-07-14 05:48:29.358064] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.266 [2024-07-14 05:48:29.358229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.266 [2024-07-14 05:48:29.358255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.266 [2024-07-14 05:48:29.358270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.266 [2024-07-14 05:48:29.358284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.266 [2024-07-14 05:48:29.358313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.266 qpair failed and we were unable to recover it. 00:34:22.266 [2024-07-14 05:48:29.368137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.266 [2024-07-14 05:48:29.368289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.266 [2024-07-14 05:48:29.368317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.266 [2024-07-14 05:48:29.368333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.266 [2024-07-14 05:48:29.368348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.266 [2024-07-14 05:48:29.368378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.266 qpair failed and we were unable to recover it. 00:34:22.524 [2024-07-14 05:48:29.378117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.524 [2024-07-14 05:48:29.378271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.524 [2024-07-14 05:48:29.378300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.524 [2024-07-14 05:48:29.378316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.524 [2024-07-14 05:48:29.378331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.524 [2024-07-14 05:48:29.378361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.524 qpair failed and we were unable to recover it. 
00:34:22.524 [2024-07-14 05:48:29.388136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.524 [2024-07-14 05:48:29.388291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.524 [2024-07-14 05:48:29.388318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.524 [2024-07-14 05:48:29.388335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.524 [2024-07-14 05:48:29.388348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.524 [2024-07-14 05:48:29.388378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.524 qpair failed and we were unable to recover it. 00:34:22.524 [2024-07-14 05:48:29.398249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.524 [2024-07-14 05:48:29.398429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.524 [2024-07-14 05:48:29.398461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.524 [2024-07-14 05:48:29.398477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.524 [2024-07-14 05:48:29.398492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.524 [2024-07-14 05:48:29.398522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.524 qpair failed and we were unable to recover it. 00:34:22.524 [2024-07-14 05:48:29.408206] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.524 [2024-07-14 05:48:29.408360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.524 [2024-07-14 05:48:29.408387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.524 [2024-07-14 05:48:29.408402] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.524 [2024-07-14 05:48:29.408415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.524 [2024-07-14 05:48:29.408445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.524 qpair failed and we were unable to recover it. 
00:34:22.524 [2024-07-14 05:48:29.418259] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.524 [2024-07-14 05:48:29.418414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.524 [2024-07-14 05:48:29.418440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.524 [2024-07-14 05:48:29.418456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.524 [2024-07-14 05:48:29.418471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.524 [2024-07-14 05:48:29.418501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.524 qpair failed and we were unable to recover it. 00:34:22.524 [2024-07-14 05:48:29.428289] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.524 [2024-07-14 05:48:29.428445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.524 [2024-07-14 05:48:29.428472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.524 [2024-07-14 05:48:29.428488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.524 [2024-07-14 05:48:29.428503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.524 [2024-07-14 05:48:29.428532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.524 qpair failed and we were unable to recover it. 00:34:22.524 [2024-07-14 05:48:29.438304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.524 [2024-07-14 05:48:29.438511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.524 [2024-07-14 05:48:29.438538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.524 [2024-07-14 05:48:29.438553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.524 [2024-07-14 05:48:29.438566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.524 [2024-07-14 05:48:29.438601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.524 qpair failed and we were unable to recover it. 
00:34:22.524 [2024-07-14 05:48:29.448352] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.524 [2024-07-14 05:48:29.448515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.524 [2024-07-14 05:48:29.448541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.524 [2024-07-14 05:48:29.448557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.524 [2024-07-14 05:48:29.448571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.524 [2024-07-14 05:48:29.448600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.524 qpair failed and we were unable to recover it. 00:34:22.524 [2024-07-14 05:48:29.458341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.524 [2024-07-14 05:48:29.458490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.525 [2024-07-14 05:48:29.458516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.525 [2024-07-14 05:48:29.458531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.525 [2024-07-14 05:48:29.458547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.525 [2024-07-14 05:48:29.458575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.525 qpair failed and we were unable to recover it. 00:34:22.525 [2024-07-14 05:48:29.468373] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.525 [2024-07-14 05:48:29.468527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.525 [2024-07-14 05:48:29.468553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.525 [2024-07-14 05:48:29.468569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.525 [2024-07-14 05:48:29.468583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.525 [2024-07-14 05:48:29.468612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.525 qpair failed and we were unable to recover it. 
00:34:22.525 [2024-07-14 05:48:29.478472] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.525 [2024-07-14 05:48:29.478663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.525 [2024-07-14 05:48:29.478705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.525 [2024-07-14 05:48:29.478723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.525 [2024-07-14 05:48:29.478738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.525 [2024-07-14 05:48:29.478767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.525 qpair failed and we were unable to recover it. 00:34:22.525 [2024-07-14 05:48:29.488463] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.525 [2024-07-14 05:48:29.488612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.525 [2024-07-14 05:48:29.488643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.525 [2024-07-14 05:48:29.488660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.525 [2024-07-14 05:48:29.488675] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.525 [2024-07-14 05:48:29.488704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.525 qpair failed and we were unable to recover it. 00:34:22.525 [2024-07-14 05:48:29.498482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.525 [2024-07-14 05:48:29.498630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.525 [2024-07-14 05:48:29.498657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.525 [2024-07-14 05:48:29.498672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.525 [2024-07-14 05:48:29.498686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.525 [2024-07-14 05:48:29.498716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.525 qpair failed and we were unable to recover it. 
00:34:22.525 [2024-07-14 05:48:29.508575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.525 [2024-07-14 05:48:29.508728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.525 [2024-07-14 05:48:29.508755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.525 [2024-07-14 05:48:29.508770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.525 [2024-07-14 05:48:29.508785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.525 [2024-07-14 05:48:29.508815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.525 qpair failed and we were unable to recover it. 00:34:22.525 [2024-07-14 05:48:29.518557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.525 [2024-07-14 05:48:29.518741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.525 [2024-07-14 05:48:29.518766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.525 [2024-07-14 05:48:29.518782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.525 [2024-07-14 05:48:29.518795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.525 [2024-07-14 05:48:29.518825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.525 qpair failed and we were unable to recover it. 00:34:22.525 [2024-07-14 05:48:29.528544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.525 [2024-07-14 05:48:29.528696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.525 [2024-07-14 05:48:29.528723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.525 [2024-07-14 05:48:29.528738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.525 [2024-07-14 05:48:29.528759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.525 [2024-07-14 05:48:29.528789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.525 qpair failed and we were unable to recover it. 
00:34:22.525 [2024-07-14 05:48:29.538626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.525 [2024-07-14 05:48:29.538785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.525 [2024-07-14 05:48:29.538811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.525 [2024-07-14 05:48:29.538827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.525 [2024-07-14 05:48:29.538840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.525 [2024-07-14 05:48:29.538877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.525 qpair failed and we were unable to recover it. 00:34:22.525 [2024-07-14 05:48:29.548656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.525 [2024-07-14 05:48:29.548825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.525 [2024-07-14 05:48:29.548851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.525 [2024-07-14 05:48:29.548874] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.525 [2024-07-14 05:48:29.548891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.525 [2024-07-14 05:48:29.548921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.525 qpair failed and we were unable to recover it. 00:34:22.525 [2024-07-14 05:48:29.558669] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.525 [2024-07-14 05:48:29.558835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.525 [2024-07-14 05:48:29.558861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.525 [2024-07-14 05:48:29.558883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.525 [2024-07-14 05:48:29.558898] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.525 [2024-07-14 05:48:29.558928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.525 qpair failed and we were unable to recover it. 
00:34:22.525 [2024-07-14 05:48:29.568676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.525 [2024-07-14 05:48:29.568887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.525 [2024-07-14 05:48:29.568913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.525 [2024-07-14 05:48:29.568929] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.525 [2024-07-14 05:48:29.568944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.525 [2024-07-14 05:48:29.568974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.525 qpair failed and we were unable to recover it. 00:34:22.525 [2024-07-14 05:48:29.578720] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.525 [2024-07-14 05:48:29.578888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.525 [2024-07-14 05:48:29.578915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.525 [2024-07-14 05:48:29.578930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.525 [2024-07-14 05:48:29.578944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.525 [2024-07-14 05:48:29.578974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.525 qpair failed and we were unable to recover it. 00:34:22.525 [2024-07-14 05:48:29.588720] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.525 [2024-07-14 05:48:29.588882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.525 [2024-07-14 05:48:29.588909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.525 [2024-07-14 05:48:29.588924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.525 [2024-07-14 05:48:29.588938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.525 [2024-07-14 05:48:29.588967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.525 qpair failed and we were unable to recover it. 
00:34:22.525 [2024-07-14 05:48:29.598769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.525 [2024-07-14 05:48:29.598944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.525 [2024-07-14 05:48:29.598970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.525 [2024-07-14 05:48:29.598985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.525 [2024-07-14 05:48:29.599000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.525 [2024-07-14 05:48:29.599029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.526 qpair failed and we were unable to recover it. 00:34:22.526 [2024-07-14 05:48:29.608775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.526 [2024-07-14 05:48:29.608955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.526 [2024-07-14 05:48:29.608981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.526 [2024-07-14 05:48:29.608996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.526 [2024-07-14 05:48:29.609010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.526 [2024-07-14 05:48:29.609039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.526 qpair failed and we were unable to recover it. 00:34:22.526 [2024-07-14 05:48:29.618836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.526 [2024-07-14 05:48:29.619024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.526 [2024-07-14 05:48:29.619050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.526 [2024-07-14 05:48:29.619066] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.526 [2024-07-14 05:48:29.619086] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.526 [2024-07-14 05:48:29.619116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.526 qpair failed and we were unable to recover it. 
00:34:22.526 [2024-07-14 05:48:29.628870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.526 [2024-07-14 05:48:29.629026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.526 [2024-07-14 05:48:29.629055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.526 [2024-07-14 05:48:29.629072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.526 [2024-07-14 05:48:29.629086] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.526 [2024-07-14 05:48:29.629117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.526 qpair failed and we were unable to recover it. 00:34:22.783 [2024-07-14 05:48:29.638930] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.784 [2024-07-14 05:48:29.639130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.784 [2024-07-14 05:48:29.639169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.784 [2024-07-14 05:48:29.639185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.784 [2024-07-14 05:48:29.639199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.784 [2024-07-14 05:48:29.639230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.784 qpair failed and we were unable to recover it. 00:34:22.784 [2024-07-14 05:48:29.648887] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.784 [2024-07-14 05:48:29.649049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.784 [2024-07-14 05:48:29.649075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.784 [2024-07-14 05:48:29.649092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.784 [2024-07-14 05:48:29.649106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.784 [2024-07-14 05:48:29.649135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.784 qpair failed and we were unable to recover it. 
00:34:22.784 [2024-07-14 05:48:29.658921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.784 [2024-07-14 05:48:29.659073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.784 [2024-07-14 05:48:29.659099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.784 [2024-07-14 05:48:29.659114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.784 [2024-07-14 05:48:29.659128] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.784 [2024-07-14 05:48:29.659158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.784 qpair failed and we were unable to recover it. 00:34:22.784 [2024-07-14 05:48:29.668963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.784 [2024-07-14 05:48:29.669138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.784 [2024-07-14 05:48:29.669165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.784 [2024-07-14 05:48:29.669180] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.784 [2024-07-14 05:48:29.669194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.784 [2024-07-14 05:48:29.669223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.784 qpair failed and we were unable to recover it. 00:34:22.784 [2024-07-14 05:48:29.679032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.784 [2024-07-14 05:48:29.679230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.784 [2024-07-14 05:48:29.679256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.784 [2024-07-14 05:48:29.679271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.784 [2024-07-14 05:48:29.679285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.784 [2024-07-14 05:48:29.679314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.784 qpair failed and we were unable to recover it. 
00:34:22.784 [2024-07-14 05:48:29.689032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.784 [2024-07-14 05:48:29.689190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.784 [2024-07-14 05:48:29.689218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.784 [2024-07-14 05:48:29.689233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.784 [2024-07-14 05:48:29.689247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.784 [2024-07-14 05:48:29.689276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.784 qpair failed and we were unable to recover it. 00:34:22.784 [2024-07-14 05:48:29.699067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.784 [2024-07-14 05:48:29.699221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.784 [2024-07-14 05:48:29.699247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.784 [2024-07-14 05:48:29.699263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.784 [2024-07-14 05:48:29.699277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.784 [2024-07-14 05:48:29.699321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.784 qpair failed and we were unable to recover it. 00:34:22.784 [2024-07-14 05:48:29.709103] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.784 [2024-07-14 05:48:29.709262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.784 [2024-07-14 05:48:29.709288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.784 [2024-07-14 05:48:29.709309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.784 [2024-07-14 05:48:29.709324] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.784 [2024-07-14 05:48:29.709353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.784 qpair failed and we were unable to recover it. 
00:34:22.784 [2024-07-14 05:48:29.719104] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.784 [2024-07-14 05:48:29.719268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.784 [2024-07-14 05:48:29.719294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.784 [2024-07-14 05:48:29.719309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.784 [2024-07-14 05:48:29.719324] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.784 [2024-07-14 05:48:29.719352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.784 qpair failed and we were unable to recover it. 00:34:22.784 [2024-07-14 05:48:29.729202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.784 [2024-07-14 05:48:29.729359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.784 [2024-07-14 05:48:29.729384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.784 [2024-07-14 05:48:29.729399] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.784 [2024-07-14 05:48:29.729413] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.784 [2024-07-14 05:48:29.729442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.784 qpair failed and we were unable to recover it. 00:34:22.784 [2024-07-14 05:48:29.739214] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.784 [2024-07-14 05:48:29.739382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.784 [2024-07-14 05:48:29.739408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.784 [2024-07-14 05:48:29.739423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.784 [2024-07-14 05:48:29.739437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.784 [2024-07-14 05:48:29.739480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.784 qpair failed and we were unable to recover it. 
00:34:22.784 [2024-07-14 05:48:29.749163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.784 [2024-07-14 05:48:29.749319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.784 [2024-07-14 05:48:29.749345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.784 [2024-07-14 05:48:29.749360] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.784 [2024-07-14 05:48:29.749374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.784 [2024-07-14 05:48:29.749403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.784 qpair failed and we were unable to recover it. 00:34:22.784 [2024-07-14 05:48:29.759246] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.784 [2024-07-14 05:48:29.759420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.784 [2024-07-14 05:48:29.759445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.784 [2024-07-14 05:48:29.759460] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.784 [2024-07-14 05:48:29.759474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.784 [2024-07-14 05:48:29.759503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.784 qpair failed and we were unable to recover it. 00:34:22.784 [2024-07-14 05:48:29.769232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.784 [2024-07-14 05:48:29.769428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.784 [2024-07-14 05:48:29.769454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.784 [2024-07-14 05:48:29.769469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.784 [2024-07-14 05:48:29.769484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.784 [2024-07-14 05:48:29.769512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.784 qpair failed and we were unable to recover it. 
00:34:22.784 [2024-07-14 05:48:29.779266] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.784 [2024-07-14 05:48:29.779428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.784 [2024-07-14 05:48:29.779454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.784 [2024-07-14 05:48:29.779470] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.785 [2024-07-14 05:48:29.779484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.785 [2024-07-14 05:48:29.779512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.785 qpair failed and we were unable to recover it. 00:34:22.785 [2024-07-14 05:48:29.789305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.785 [2024-07-14 05:48:29.789507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.785 [2024-07-14 05:48:29.789533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.785 [2024-07-14 05:48:29.789548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.785 [2024-07-14 05:48:29.789563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.785 [2024-07-14 05:48:29.789592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.785 qpair failed and we were unable to recover it. 00:34:22.785 [2024-07-14 05:48:29.799333] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.785 [2024-07-14 05:48:29.799499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.785 [2024-07-14 05:48:29.799525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.785 [2024-07-14 05:48:29.799549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.785 [2024-07-14 05:48:29.799564] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.785 [2024-07-14 05:48:29.799594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.785 qpair failed and we were unable to recover it. 
00:34:22.785 [2024-07-14 05:48:29.809332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.785 [2024-07-14 05:48:29.809496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.785 [2024-07-14 05:48:29.809522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.785 [2024-07-14 05:48:29.809537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.785 [2024-07-14 05:48:29.809551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.785 [2024-07-14 05:48:29.809580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.785 qpair failed and we were unable to recover it. 00:34:22.785 [2024-07-14 05:48:29.819396] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.785 [2024-07-14 05:48:29.819562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.785 [2024-07-14 05:48:29.819588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.785 [2024-07-14 05:48:29.819603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.785 [2024-07-14 05:48:29.819617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.785 [2024-07-14 05:48:29.819646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.785 qpair failed and we were unable to recover it. 00:34:22.785 [2024-07-14 05:48:29.829431] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.785 [2024-07-14 05:48:29.829595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.785 [2024-07-14 05:48:29.829620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.785 [2024-07-14 05:48:29.829636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.785 [2024-07-14 05:48:29.829649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.785 [2024-07-14 05:48:29.829678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.785 qpair failed and we were unable to recover it. 
00:34:22.785 [2024-07-14 05:48:29.839421] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.785 [2024-07-14 05:48:29.839592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.785 [2024-07-14 05:48:29.839617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.785 [2024-07-14 05:48:29.839633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.785 [2024-07-14 05:48:29.839647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.785 [2024-07-14 05:48:29.839675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.785 qpair failed and we were unable to recover it. 00:34:22.785 [2024-07-14 05:48:29.849441] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.785 [2024-07-14 05:48:29.849603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.785 [2024-07-14 05:48:29.849629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.785 [2024-07-14 05:48:29.849645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.785 [2024-07-14 05:48:29.849659] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.785 [2024-07-14 05:48:29.849687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.785 qpair failed and we were unable to recover it. 00:34:22.785 [2024-07-14 05:48:29.859547] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.785 [2024-07-14 05:48:29.859738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.785 [2024-07-14 05:48:29.859764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.785 [2024-07-14 05:48:29.859779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.785 [2024-07-14 05:48:29.859793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.785 [2024-07-14 05:48:29.859823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.785 qpair failed and we were unable to recover it. 
00:34:22.785 [2024-07-14 05:48:29.869612] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.785 [2024-07-14 05:48:29.869775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.785 [2024-07-14 05:48:29.869802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.785 [2024-07-14 05:48:29.869817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.785 [2024-07-14 05:48:29.869831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.785 [2024-07-14 05:48:29.869859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.785 qpair failed and we were unable to recover it. 00:34:22.785 [2024-07-14 05:48:29.879541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.785 [2024-07-14 05:48:29.879710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.785 [2024-07-14 05:48:29.879735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.785 [2024-07-14 05:48:29.879751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.785 [2024-07-14 05:48:29.879765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:22.785 [2024-07-14 05:48:29.879793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.785 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-14 05:48:29.889682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.043 [2024-07-14 05:48:29.889848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.043 [2024-07-14 05:48:29.889884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.043 [2024-07-14 05:48:29.889907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.043 [2024-07-14 05:48:29.889922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.043 [2024-07-14 05:48:29.889952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.043 qpair failed and we were unable to recover it. 
00:34:23.043 [2024-07-14 05:48:29.899635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.043 [2024-07-14 05:48:29.899840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.043 [2024-07-14 05:48:29.899874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.043 [2024-07-14 05:48:29.899891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.044 [2024-07-14 05:48:29.899906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.044 [2024-07-14 05:48:29.899937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-14 05:48:29.909619] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.044 [2024-07-14 05:48:29.909779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.044 [2024-07-14 05:48:29.909805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.044 [2024-07-14 05:48:29.909820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.044 [2024-07-14 05:48:29.909834] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.044 [2024-07-14 05:48:29.909863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-14 05:48:29.919667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.044 [2024-07-14 05:48:29.919853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.044 [2024-07-14 05:48:29.919887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.044 [2024-07-14 05:48:29.919903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.044 [2024-07-14 05:48:29.919917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.044 [2024-07-14 05:48:29.919946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.044 qpair failed and we were unable to recover it. 
00:34:23.044 [2024-07-14 05:48:29.929703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.044 [2024-07-14 05:48:29.929882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.044 [2024-07-14 05:48:29.929909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.044 [2024-07-14 05:48:29.929924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.044 [2024-07-14 05:48:29.929938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.044 [2024-07-14 05:48:29.929968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-14 05:48:29.939795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.044 [2024-07-14 05:48:29.939975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.044 [2024-07-14 05:48:29.940002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.044 [2024-07-14 05:48:29.940017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.044 [2024-07-14 05:48:29.940032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.044 [2024-07-14 05:48:29.940062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-14 05:48:29.949754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.044 [2024-07-14 05:48:29.949941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.044 [2024-07-14 05:48:29.949968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.044 [2024-07-14 05:48:29.949987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.044 [2024-07-14 05:48:29.950002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.044 [2024-07-14 05:48:29.950032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.044 qpair failed and we were unable to recover it. 
00:34:23.044 [2024-07-14 05:48:29.959790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.044 [2024-07-14 05:48:29.959971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.044 [2024-07-14 05:48:29.960000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.044 [2024-07-14 05:48:29.960018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.044 [2024-07-14 05:48:29.960031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.044 [2024-07-14 05:48:29.960061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-14 05:48:29.969815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.044 [2024-07-14 05:48:29.970029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.044 [2024-07-14 05:48:29.970057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.044 [2024-07-14 05:48:29.970072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.044 [2024-07-14 05:48:29.970086] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.044 [2024-07-14 05:48:29.970115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-14 05:48:29.979838] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.044 [2024-07-14 05:48:29.980008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.044 [2024-07-14 05:48:29.980040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.044 [2024-07-14 05:48:29.980056] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.044 [2024-07-14 05:48:29.980071] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.044 [2024-07-14 05:48:29.980100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.044 qpair failed and we were unable to recover it. 
00:34:23.044 [2024-07-14 05:48:29.989861] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.044 [2024-07-14 05:48:29.990020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.044 [2024-07-14 05:48:29.990047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.044 [2024-07-14 05:48:29.990062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.044 [2024-07-14 05:48:29.990076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.044 [2024-07-14 05:48:29.990106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-14 05:48:29.999903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.044 [2024-07-14 05:48:30.000093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.044 [2024-07-14 05:48:30.000119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.044 [2024-07-14 05:48:30.000134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.044 [2024-07-14 05:48:30.000147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.044 [2024-07-14 05:48:30.000178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-14 05:48:30.010028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.044 [2024-07-14 05:48:30.010220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.044 [2024-07-14 05:48:30.010263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.044 [2024-07-14 05:48:30.010280] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.044 [2024-07-14 05:48:30.010293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.044 [2024-07-14 05:48:30.010337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.044 qpair failed and we were unable to recover it. 
00:34:23.044 [2024-07-14 05:48:30.020002] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.044 [2024-07-14 05:48:30.020202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.044 [2024-07-14 05:48:30.020242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.044 [2024-07-14 05:48:30.020258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.044 [2024-07-14 05:48:30.020273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.044 [2024-07-14 05:48:30.020318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-14 05:48:30.030030] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.044 [2024-07-14 05:48:30.030206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.044 [2024-07-14 05:48:30.030233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.044 [2024-07-14 05:48:30.030248] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.044 [2024-07-14 05:48:30.030263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.044 [2024-07-14 05:48:30.030292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-14 05:48:30.040073] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.044 [2024-07-14 05:48:30.040251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.045 [2024-07-14 05:48:30.040278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.045 [2024-07-14 05:48:30.040294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.045 [2024-07-14 05:48:30.040308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.045 [2024-07-14 05:48:30.040338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.045 qpair failed and we were unable to recover it. 
00:34:23.045 [2024-07-14 05:48:30.050061] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.045 [2024-07-14 05:48:30.050221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.045 [2024-07-14 05:48:30.050247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.045 [2024-07-14 05:48:30.050263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.045 [2024-07-14 05:48:30.050276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.045 [2024-07-14 05:48:30.050305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-14 05:48:30.060090] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.045 [2024-07-14 05:48:30.060251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.045 [2024-07-14 05:48:30.060278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.045 [2024-07-14 05:48:30.060293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.045 [2024-07-14 05:48:30.060307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.045 [2024-07-14 05:48:30.060336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-14 05:48:30.070097] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.045 [2024-07-14 05:48:30.070258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.045 [2024-07-14 05:48:30.070290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.045 [2024-07-14 05:48:30.070306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.045 [2024-07-14 05:48:30.070321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.045 [2024-07-14 05:48:30.070349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.045 qpair failed and we were unable to recover it. 
00:34:23.045 [2024-07-14 05:48:30.080143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.045 [2024-07-14 05:48:30.080306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.045 [2024-07-14 05:48:30.080332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.045 [2024-07-14 05:48:30.080347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.045 [2024-07-14 05:48:30.080361] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.045 [2024-07-14 05:48:30.080390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-14 05:48:30.090292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.045 [2024-07-14 05:48:30.090451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.045 [2024-07-14 05:48:30.090477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.045 [2024-07-14 05:48:30.090492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.045 [2024-07-14 05:48:30.090506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.045 [2024-07-14 05:48:30.090536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-14 05:48:30.100174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.045 [2024-07-14 05:48:30.100331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.045 [2024-07-14 05:48:30.100358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.045 [2024-07-14 05:48:30.100373] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.045 [2024-07-14 05:48:30.100387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.045 [2024-07-14 05:48:30.100416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.045 qpair failed and we were unable to recover it. 
00:34:23.045 [2024-07-14 05:48:30.110260] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.045 [2024-07-14 05:48:30.110420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.045 [2024-07-14 05:48:30.110446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.045 [2024-07-14 05:48:30.110461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.045 [2024-07-14 05:48:30.110475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.045 [2024-07-14 05:48:30.110510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-14 05:48:30.120281] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.045 [2024-07-14 05:48:30.120477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.045 [2024-07-14 05:48:30.120504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.045 [2024-07-14 05:48:30.120519] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.045 [2024-07-14 05:48:30.120533] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.045 [2024-07-14 05:48:30.120562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-14 05:48:30.130356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.045 [2024-07-14 05:48:30.130521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.045 [2024-07-14 05:48:30.130546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.045 [2024-07-14 05:48:30.130562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.045 [2024-07-14 05:48:30.130576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.045 [2024-07-14 05:48:30.130604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.045 qpair failed and we were unable to recover it. 
00:34:23.045 [2024-07-14 05:48:30.140306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.045 [2024-07-14 05:48:30.140463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.045 [2024-07-14 05:48:30.140489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.045 [2024-07-14 05:48:30.140504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.045 [2024-07-14 05:48:30.140519] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.045 [2024-07-14 05:48:30.140548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.303 [2024-07-14 05:48:30.150341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.303 [2024-07-14 05:48:30.150504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.303 [2024-07-14 05:48:30.150532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.303 [2024-07-14 05:48:30.150548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.303 [2024-07-14 05:48:30.150562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.303 [2024-07-14 05:48:30.150607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.303 qpair failed and we were unable to recover it. 00:34:23.303 [2024-07-14 05:48:30.160360] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.303 [2024-07-14 05:48:30.160531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.303 [2024-07-14 05:48:30.160564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.303 [2024-07-14 05:48:30.160581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.303 [2024-07-14 05:48:30.160595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.303 [2024-07-14 05:48:30.160625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.303 qpair failed and we were unable to recover it. 
00:34:23.303 [2024-07-14 05:48:30.170426] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.303 [2024-07-14 05:48:30.170603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.303 [2024-07-14 05:48:30.170630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.303 [2024-07-14 05:48:30.170645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.303 [2024-07-14 05:48:30.170673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.303 [2024-07-14 05:48:30.170702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.303 qpair failed and we were unable to recover it. 00:34:23.303 [2024-07-14 05:48:30.180402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.303 [2024-07-14 05:48:30.180565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.303 [2024-07-14 05:48:30.180591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.303 [2024-07-14 05:48:30.180607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.303 [2024-07-14 05:48:30.180621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.303 [2024-07-14 05:48:30.180651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.303 qpair failed and we were unable to recover it. 00:34:23.303 [2024-07-14 05:48:30.190455] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.303 [2024-07-14 05:48:30.190612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.303 [2024-07-14 05:48:30.190638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.303 [2024-07-14 05:48:30.190654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.304 [2024-07-14 05:48:30.190667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.304 [2024-07-14 05:48:30.190698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.304 qpair failed and we were unable to recover it. 
00:34:23.304 [2024-07-14 05:48:30.200482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.304 [2024-07-14 05:48:30.200658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.304 [2024-07-14 05:48:30.200685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.304 [2024-07-14 05:48:30.200700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.304 [2024-07-14 05:48:30.200714] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.304 [2024-07-14 05:48:30.200749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.304 qpair failed and we were unable to recover it. 00:34:23.304 [2024-07-14 05:48:30.210516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.304 [2024-07-14 05:48:30.210676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.304 [2024-07-14 05:48:30.210703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.304 [2024-07-14 05:48:30.210719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.304 [2024-07-14 05:48:30.210732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.304 [2024-07-14 05:48:30.210776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.304 qpair failed and we were unable to recover it. 00:34:23.304 [2024-07-14 05:48:30.220610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.304 [2024-07-14 05:48:30.220785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.304 [2024-07-14 05:48:30.220812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.304 [2024-07-14 05:48:30.220828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.304 [2024-07-14 05:48:30.220842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.304 [2024-07-14 05:48:30.220878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.304 qpair failed and we were unable to recover it. 
00:34:23.304 [2024-07-14 05:48:30.230562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.304 [2024-07-14 05:48:30.230756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.304 [2024-07-14 05:48:30.230783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.304 [2024-07-14 05:48:30.230799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.304 [2024-07-14 05:48:30.230812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.304 [2024-07-14 05:48:30.230841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.304 qpair failed and we were unable to recover it. 00:34:23.304 [2024-07-14 05:48:30.240604] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.304 [2024-07-14 05:48:30.240791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.304 [2024-07-14 05:48:30.240820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.304 [2024-07-14 05:48:30.240837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.304 [2024-07-14 05:48:30.240851] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.304 [2024-07-14 05:48:30.240892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.304 qpair failed and we were unable to recover it. 00:34:23.304 [2024-07-14 05:48:30.250638] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.304 [2024-07-14 05:48:30.250799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.304 [2024-07-14 05:48:30.250839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.304 [2024-07-14 05:48:30.250855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.304 [2024-07-14 05:48:30.250876] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.304 [2024-07-14 05:48:30.250907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.304 qpair failed and we were unable to recover it. 
00:34:23.304 [2024-07-14 05:48:30.260663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.304 [2024-07-14 05:48:30.260832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.304 [2024-07-14 05:48:30.260859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.304 [2024-07-14 05:48:30.260882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.304 [2024-07-14 05:48:30.260898] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.304 [2024-07-14 05:48:30.260926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.304 qpair failed and we were unable to recover it. 00:34:23.304 [2024-07-14 05:48:30.270675] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.304 [2024-07-14 05:48:30.270886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.304 [2024-07-14 05:48:30.270913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.304 [2024-07-14 05:48:30.270929] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.304 [2024-07-14 05:48:30.270944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.304 [2024-07-14 05:48:30.270972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.304 qpair failed and we were unable to recover it. 00:34:23.304 [2024-07-14 05:48:30.280719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.304 [2024-07-14 05:48:30.280887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.304 [2024-07-14 05:48:30.280914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.304 [2024-07-14 05:48:30.280930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.304 [2024-07-14 05:48:30.280944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.304 [2024-07-14 05:48:30.280973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.304 qpair failed and we were unable to recover it. 
00:34:23.304 [2024-07-14 05:48:30.290716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.304 [2024-07-14 05:48:30.290927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.304 [2024-07-14 05:48:30.290955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.304 [2024-07-14 05:48:30.290971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.304 [2024-07-14 05:48:30.290991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.304 [2024-07-14 05:48:30.291021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.304 qpair failed and we were unable to recover it. 00:34:23.304 [2024-07-14 05:48:30.300754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.304 [2024-07-14 05:48:30.300942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.304 [2024-07-14 05:48:30.300970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.304 [2024-07-14 05:48:30.300987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.304 [2024-07-14 05:48:30.301000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.304 [2024-07-14 05:48:30.301030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.304 qpair failed and we were unable to recover it. 00:34:23.304 [2024-07-14 05:48:30.310781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.304 [2024-07-14 05:48:30.310952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.304 [2024-07-14 05:48:30.310979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.304 [2024-07-14 05:48:30.310995] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.304 [2024-07-14 05:48:30.311009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.304 [2024-07-14 05:48:30.311037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.304 qpair failed and we were unable to recover it. 
00:34:23.304 [2024-07-14 05:48:30.320857] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.304 [2024-07-14 05:48:30.321027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.304 [2024-07-14 05:48:30.321053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.304 [2024-07-14 05:48:30.321069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.304 [2024-07-14 05:48:30.321083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.304 [2024-07-14 05:48:30.321112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.304 qpair failed and we were unable to recover it. 00:34:23.304 [2024-07-14 05:48:30.330890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.304 [2024-07-14 05:48:30.331054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.304 [2024-07-14 05:48:30.331081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.304 [2024-07-14 05:48:30.331096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.304 [2024-07-14 05:48:30.331110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.304 [2024-07-14 05:48:30.331139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.304 qpair failed and we were unable to recover it. 00:34:23.305 [2024-07-14 05:48:30.340884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.305 [2024-07-14 05:48:30.341079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.305 [2024-07-14 05:48:30.341109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.305 [2024-07-14 05:48:30.341125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.305 [2024-07-14 05:48:30.341139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.305 [2024-07-14 05:48:30.341169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.305 qpair failed and we were unable to recover it. 
00:34:23.305 [2024-07-14 05:48:30.350902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.305 [2024-07-14 05:48:30.351055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.305 [2024-07-14 05:48:30.351083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.305 [2024-07-14 05:48:30.351099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.305 [2024-07-14 05:48:30.351113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.305 [2024-07-14 05:48:30.351142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.305 qpair failed and we were unable to recover it. 00:34:23.305 [2024-07-14 05:48:30.360958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.305 [2024-07-14 05:48:30.361118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.305 [2024-07-14 05:48:30.361145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.305 [2024-07-14 05:48:30.361160] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.305 [2024-07-14 05:48:30.361187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.305 [2024-07-14 05:48:30.361218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.305 qpair failed and we were unable to recover it. 00:34:23.305 [2024-07-14 05:48:30.370968] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.305 [2024-07-14 05:48:30.371125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.305 [2024-07-14 05:48:30.371152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.305 [2024-07-14 05:48:30.371168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.305 [2024-07-14 05:48:30.371182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.305 [2024-07-14 05:48:30.371210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.305 qpair failed and we were unable to recover it. 
00:34:23.305 [2024-07-14 05:48:30.381009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.305 [2024-07-14 05:48:30.381162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.305 [2024-07-14 05:48:30.381189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.305 [2024-07-14 05:48:30.381204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.305 [2024-07-14 05:48:30.381223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.305 [2024-07-14 05:48:30.381253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.305 qpair failed and we were unable to recover it. 00:34:23.305 [2024-07-14 05:48:30.391015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.305 [2024-07-14 05:48:30.391171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.305 [2024-07-14 05:48:30.391199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.305 [2024-07-14 05:48:30.391214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.305 [2024-07-14 05:48:30.391228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.305 [2024-07-14 05:48:30.391256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.305 qpair failed and we were unable to recover it. 00:34:23.305 [2024-07-14 05:48:30.401045] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.305 [2024-07-14 05:48:30.401206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.305 [2024-07-14 05:48:30.401233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.305 [2024-07-14 05:48:30.401248] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.305 [2024-07-14 05:48:30.401262] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.305 [2024-07-14 05:48:30.401291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.305 qpair failed and we were unable to recover it. 
00:34:23.562 [2024-07-14 05:48:30.411112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.562 [2024-07-14 05:48:30.411266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.562 [2024-07-14 05:48:30.411296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.562 [2024-07-14 05:48:30.411312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.562 [2024-07-14 05:48:30.411326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.562 [2024-07-14 05:48:30.411356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.562 qpair failed and we were unable to recover it. 00:34:23.562 [2024-07-14 05:48:30.421131] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.562 [2024-07-14 05:48:30.421288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.562 [2024-07-14 05:48:30.421316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.562 [2024-07-14 05:48:30.421333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.562 [2024-07-14 05:48:30.421347] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.562 [2024-07-14 05:48:30.421392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.562 qpair failed and we were unable to recover it. 00:34:23.562 [2024-07-14 05:48:30.431117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.562 [2024-07-14 05:48:30.431282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.562 [2024-07-14 05:48:30.431310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.562 [2024-07-14 05:48:30.431325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.562 [2024-07-14 05:48:30.431339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.562 [2024-07-14 05:48:30.431368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.562 qpair failed and we were unable to recover it. 
00:34:23.562 [2024-07-14 05:48:30.441190] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.562 [2024-07-14 05:48:30.441352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.562 [2024-07-14 05:48:30.441380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.562 [2024-07-14 05:48:30.441396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.562 [2024-07-14 05:48:30.441410] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.562 [2024-07-14 05:48:30.441439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.562 qpair failed and we were unable to recover it. 00:34:23.562 [2024-07-14 05:48:30.451186] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.563 [2024-07-14 05:48:30.451335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.563 [2024-07-14 05:48:30.451362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.563 [2024-07-14 05:48:30.451378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.563 [2024-07-14 05:48:30.451391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.563 [2024-07-14 05:48:30.451421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.563 qpair failed and we were unable to recover it. 00:34:23.563 [2024-07-14 05:48:30.461207] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.563 [2024-07-14 05:48:30.461380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.563 [2024-07-14 05:48:30.461407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.563 [2024-07-14 05:48:30.461423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.563 [2024-07-14 05:48:30.461437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.563 [2024-07-14 05:48:30.461465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.563 qpair failed and we were unable to recover it. 
00:34:23.563 [2024-07-14 05:48:30.471229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.563 [2024-07-14 05:48:30.471380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.563 [2024-07-14 05:48:30.471407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.563 [2024-07-14 05:48:30.471428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.563 [2024-07-14 05:48:30.471442] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.563 [2024-07-14 05:48:30.471471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.563 qpair failed and we were unable to recover it. 00:34:23.563 [2024-07-14 05:48:30.481274] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.563 [2024-07-14 05:48:30.481433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.563 [2024-07-14 05:48:30.481460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.563 [2024-07-14 05:48:30.481476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.563 [2024-07-14 05:48:30.481490] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.563 [2024-07-14 05:48:30.481519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.563 qpair failed and we were unable to recover it. 00:34:23.563 [2024-07-14 05:48:30.491322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.563 [2024-07-14 05:48:30.491477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.563 [2024-07-14 05:48:30.491504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.563 [2024-07-14 05:48:30.491519] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.563 [2024-07-14 05:48:30.491533] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.563 [2024-07-14 05:48:30.491563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.563 qpair failed and we were unable to recover it. 
00:34:23.563 [2024-07-14 05:48:30.501313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.563 [2024-07-14 05:48:30.501467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.563 [2024-07-14 05:48:30.501495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.563 [2024-07-14 05:48:30.501511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.563 [2024-07-14 05:48:30.501525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.563 [2024-07-14 05:48:30.501554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.563 qpair failed and we were unable to recover it. 00:34:23.563 [2024-07-14 05:48:30.511362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.563 [2024-07-14 05:48:30.511530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.563 [2024-07-14 05:48:30.511557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.563 [2024-07-14 05:48:30.511573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.563 [2024-07-14 05:48:30.511587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.563 [2024-07-14 05:48:30.511616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.563 qpair failed and we were unable to recover it. 00:34:23.563 [2024-07-14 05:48:30.521400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.563 [2024-07-14 05:48:30.521561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.563 [2024-07-14 05:48:30.521588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.563 [2024-07-14 05:48:30.521603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.563 [2024-07-14 05:48:30.521617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.563 [2024-07-14 05:48:30.521648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.563 qpair failed and we were unable to recover it. 
00:34:23.563 [2024-07-14 05:48:30.531423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.563 [2024-07-14 05:48:30.531581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.563 [2024-07-14 05:48:30.531608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.563 [2024-07-14 05:48:30.531623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.563 [2024-07-14 05:48:30.531637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.563 [2024-07-14 05:48:30.531666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.563 qpair failed and we were unable to recover it. 00:34:23.563 [2024-07-14 05:48:30.541484] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.563 [2024-07-14 05:48:30.541684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.563 [2024-07-14 05:48:30.541710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.563 [2024-07-14 05:48:30.541726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.563 [2024-07-14 05:48:30.541740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.563 [2024-07-14 05:48:30.541769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.563 qpair failed and we were unable to recover it. 00:34:23.563 [2024-07-14 05:48:30.551559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.563 [2024-07-14 05:48:30.551745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.563 [2024-07-14 05:48:30.551772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.563 [2024-07-14 05:48:30.551787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.563 [2024-07-14 05:48:30.551801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.563 [2024-07-14 05:48:30.551829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.563 qpair failed and we were unable to recover it. 
00:34:23.563 [2024-07-14 05:48:30.561529] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.563 [2024-07-14 05:48:30.561687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.563 [2024-07-14 05:48:30.561715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.563 [2024-07-14 05:48:30.561739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.563 [2024-07-14 05:48:30.561753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.563 [2024-07-14 05:48:30.561783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.563 qpair failed and we were unable to recover it. 00:34:23.563 [2024-07-14 05:48:30.571541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.563 [2024-07-14 05:48:30.571722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.563 [2024-07-14 05:48:30.571749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.563 [2024-07-14 05:48:30.571765] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.563 [2024-07-14 05:48:30.571780] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.563 [2024-07-14 05:48:30.571809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.563 qpair failed and we were unable to recover it. 00:34:23.563 [2024-07-14 05:48:30.581600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.563 [2024-07-14 05:48:30.581786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.563 [2024-07-14 05:48:30.581814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.563 [2024-07-14 05:48:30.581849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.563 [2024-07-14 05:48:30.581862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.563 [2024-07-14 05:48:30.581916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.563 qpair failed and we were unable to recover it. 
00:34:23.563 [2024-07-14 05:48:30.591596] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.563 [2024-07-14 05:48:30.591749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.563 [2024-07-14 05:48:30.591776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.563 [2024-07-14 05:48:30.591792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.564 [2024-07-14 05:48:30.591807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.564 [2024-07-14 05:48:30.591837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.564 qpair failed and we were unable to recover it. 00:34:23.564 [2024-07-14 05:48:30.601652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.564 [2024-07-14 05:48:30.601814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.564 [2024-07-14 05:48:30.601841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.564 [2024-07-14 05:48:30.601857] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.564 [2024-07-14 05:48:30.601879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.564 [2024-07-14 05:48:30.601909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.564 qpair failed and we were unable to recover it. 00:34:23.564 [2024-07-14 05:48:30.611719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.564 [2024-07-14 05:48:30.611884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.564 [2024-07-14 05:48:30.611911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.564 [2024-07-14 05:48:30.611927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.564 [2024-07-14 05:48:30.611941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.564 [2024-07-14 05:48:30.611972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.564 qpair failed and we were unable to recover it. 
00:34:23.564 [2024-07-14 05:48:30.621667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.564 [2024-07-14 05:48:30.621823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.564 [2024-07-14 05:48:30.621850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.564 [2024-07-14 05:48:30.621875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.564 [2024-07-14 05:48:30.621892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.564 [2024-07-14 05:48:30.621921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.564 qpair failed and we were unable to recover it. 00:34:23.564 [2024-07-14 05:48:30.631698] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.564 [2024-07-14 05:48:30.631856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.564 [2024-07-14 05:48:30.631889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.564 [2024-07-14 05:48:30.631906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.564 [2024-07-14 05:48:30.631921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.564 [2024-07-14 05:48:30.631949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.564 qpair failed and we were unable to recover it. 00:34:23.564 [2024-07-14 05:48:30.641757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.564 [2024-07-14 05:48:30.641931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.564 [2024-07-14 05:48:30.641957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.564 [2024-07-14 05:48:30.641972] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.564 [2024-07-14 05:48:30.641985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.564 [2024-07-14 05:48:30.642015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.564 qpair failed and we were unable to recover it. 
00:34:23.564 [2024-07-14 05:48:30.651797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.564 [2024-07-14 05:48:30.651998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.564 [2024-07-14 05:48:30.652024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.564 [2024-07-14 05:48:30.652045] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.564 [2024-07-14 05:48:30.652061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.564 [2024-07-14 05:48:30.652092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.564 qpair failed and we were unable to recover it. 00:34:23.564 [2024-07-14 05:48:30.661783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.564 [2024-07-14 05:48:30.661938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.564 [2024-07-14 05:48:30.661965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.564 [2024-07-14 05:48:30.661981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.564 [2024-07-14 05:48:30.661995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.564 [2024-07-14 05:48:30.662024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.564 qpair failed and we were unable to recover it. 00:34:23.822 [2024-07-14 05:48:30.671926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.822 [2024-07-14 05:48:30.672114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.822 [2024-07-14 05:48:30.672143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.822 [2024-07-14 05:48:30.672160] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.822 [2024-07-14 05:48:30.672174] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.822 [2024-07-14 05:48:30.672208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.822 qpair failed and we were unable to recover it. 
00:34:23.822 [2024-07-14 05:48:30.681850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.822 [2024-07-14 05:48:30.682017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.822 [2024-07-14 05:48:30.682044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.822 [2024-07-14 05:48:30.682061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.822 [2024-07-14 05:48:30.682076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.822 [2024-07-14 05:48:30.682105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.822 qpair failed and we were unable to recover it. 00:34:23.822 [2024-07-14 05:48:30.691931] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.822 [2024-07-14 05:48:30.692086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.822 [2024-07-14 05:48:30.692114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.822 [2024-07-14 05:48:30.692129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.822 [2024-07-14 05:48:30.692143] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.822 [2024-07-14 05:48:30.692174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.822 qpair failed and we were unable to recover it. 00:34:23.822 [2024-07-14 05:48:30.701938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.822 [2024-07-14 05:48:30.702095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.822 [2024-07-14 05:48:30.702122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.822 [2024-07-14 05:48:30.702138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.822 [2024-07-14 05:48:30.702153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.822 [2024-07-14 05:48:30.702182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.822 qpair failed and we were unable to recover it. 
00:34:23.822 [2024-07-14 05:48:30.712038] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.822 [2024-07-14 05:48:30.712192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.822 [2024-07-14 05:48:30.712219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.822 [2024-07-14 05:48:30.712235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.822 [2024-07-14 05:48:30.712249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.822 [2024-07-14 05:48:30.712280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.822 qpair failed and we were unable to recover it. 00:34:23.822 [2024-07-14 05:48:30.722004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.822 [2024-07-14 05:48:30.722159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.822 [2024-07-14 05:48:30.722185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.822 [2024-07-14 05:48:30.722200] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.822 [2024-07-14 05:48:30.722214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.822 [2024-07-14 05:48:30.722244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.822 qpair failed and we were unable to recover it. 00:34:23.822 [2024-07-14 05:48:30.732041] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.822 [2024-07-14 05:48:30.732215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.822 [2024-07-14 05:48:30.732242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.822 [2024-07-14 05:48:30.732257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.822 [2024-07-14 05:48:30.732270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.822 [2024-07-14 05:48:30.732299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.822 qpair failed and we were unable to recover it. 
00:34:23.822 [2024-07-14 05:48:30.742119] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.822 [2024-07-14 05:48:30.742311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.822 [2024-07-14 05:48:30.742343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.822 [2024-07-14 05:48:30.742360] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.822 [2024-07-14 05:48:30.742373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.822 [2024-07-14 05:48:30.742402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.822 qpair failed and we were unable to recover it. 00:34:23.822 [2024-07-14 05:48:30.752093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.822 [2024-07-14 05:48:30.752246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.822 [2024-07-14 05:48:30.752273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.822 [2024-07-14 05:48:30.752289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.822 [2024-07-14 05:48:30.752303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.822 [2024-07-14 05:48:30.752333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.822 qpair failed and we were unable to recover it. 00:34:23.822 [2024-07-14 05:48:30.762104] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.822 [2024-07-14 05:48:30.762300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.822 [2024-07-14 05:48:30.762326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.822 [2024-07-14 05:48:30.762342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.822 [2024-07-14 05:48:30.762356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.822 [2024-07-14 05:48:30.762385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.822 qpair failed and we were unable to recover it. 
00:34:23.822 [2024-07-14 05:48:30.772118] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.822 [2024-07-14 05:48:30.772285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.822 [2024-07-14 05:48:30.772311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.822 [2024-07-14 05:48:30.772326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.822 [2024-07-14 05:48:30.772339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.822 [2024-07-14 05:48:30.772368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.822 qpair failed and we were unable to recover it. 00:34:23.822 [2024-07-14 05:48:30.782216] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.822 [2024-07-14 05:48:30.782365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.822 [2024-07-14 05:48:30.782392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.822 [2024-07-14 05:48:30.782408] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.822 [2024-07-14 05:48:30.782423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.822 [2024-07-14 05:48:30.782452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.822 qpair failed and we were unable to recover it. 00:34:23.822 [2024-07-14 05:48:30.792198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.822 [2024-07-14 05:48:30.792356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.822 [2024-07-14 05:48:30.792383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.822 [2024-07-14 05:48:30.792402] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.822 [2024-07-14 05:48:30.792417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.822 [2024-07-14 05:48:30.792462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.822 qpair failed and we were unable to recover it. 
00:34:23.822 [2024-07-14 05:48:30.802201] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.822 [2024-07-14 05:48:30.802360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.822 [2024-07-14 05:48:30.802387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.822 [2024-07-14 05:48:30.802403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.822 [2024-07-14 05:48:30.802416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.823 [2024-07-14 05:48:30.802446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.823 qpair failed and we were unable to recover it. 00:34:23.823 [2024-07-14 05:48:30.812355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.823 [2024-07-14 05:48:30.812536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.823 [2024-07-14 05:48:30.812563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.823 [2024-07-14 05:48:30.812579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.823 [2024-07-14 05:48:30.812592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.823 [2024-07-14 05:48:30.812621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.823 qpair failed and we were unable to recover it. 00:34:23.823 [2024-07-14 05:48:30.822245] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.823 [2024-07-14 05:48:30.822410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.823 [2024-07-14 05:48:30.822437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.823 [2024-07-14 05:48:30.822453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.823 [2024-07-14 05:48:30.822467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.823 [2024-07-14 05:48:30.822495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.823 qpair failed and we were unable to recover it. 
00:34:23.823 [2024-07-14 05:48:30.832316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.823 [2024-07-14 05:48:30.832479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.823 [2024-07-14 05:48:30.832512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.823 [2024-07-14 05:48:30.832529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.823 [2024-07-14 05:48:30.832543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.823 [2024-07-14 05:48:30.832572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.823 qpair failed and we were unable to recover it. 00:34:23.823 [2024-07-14 05:48:30.842351] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.823 [2024-07-14 05:48:30.842543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.823 [2024-07-14 05:48:30.842588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.823 [2024-07-14 05:48:30.842605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.823 [2024-07-14 05:48:30.842618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.823 [2024-07-14 05:48:30.842661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.823 qpair failed and we were unable to recover it. 00:34:23.823 [2024-07-14 05:48:30.852344] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.823 [2024-07-14 05:48:30.852516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.823 [2024-07-14 05:48:30.852544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.823 [2024-07-14 05:48:30.852560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.823 [2024-07-14 05:48:30.852574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.823 [2024-07-14 05:48:30.852602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.823 qpair failed and we were unable to recover it. 
00:34:23.823 [2024-07-14 05:48:30.862403] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.823 [2024-07-14 05:48:30.862563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.823 [2024-07-14 05:48:30.862590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.823 [2024-07-14 05:48:30.862605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.823 [2024-07-14 05:48:30.862619] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.823 [2024-07-14 05:48:30.862647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.823 qpair failed and we were unable to recover it. 00:34:23.823 [2024-07-14 05:48:30.872430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.823 [2024-07-14 05:48:30.872629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.823 [2024-07-14 05:48:30.872671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.823 [2024-07-14 05:48:30.872686] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.823 [2024-07-14 05:48:30.872700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.823 [2024-07-14 05:48:30.872749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.823 qpair failed and we were unable to recover it. 00:34:23.823 [2024-07-14 05:48:30.882508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.823 [2024-07-14 05:48:30.882700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.823 [2024-07-14 05:48:30.882741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.823 [2024-07-14 05:48:30.882756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.823 [2024-07-14 05:48:30.882770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.823 [2024-07-14 05:48:30.882799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.823 qpair failed and we were unable to recover it. 
00:34:23.823 [2024-07-14 05:48:30.892459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.823 [2024-07-14 05:48:30.892617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.823 [2024-07-14 05:48:30.892644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.823 [2024-07-14 05:48:30.892659] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.823 [2024-07-14 05:48:30.892673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.823 [2024-07-14 05:48:30.892702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.823 qpair failed and we were unable to recover it. 00:34:23.823 [2024-07-14 05:48:30.902502] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.823 [2024-07-14 05:48:30.902685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.823 [2024-07-14 05:48:30.902712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.823 [2024-07-14 05:48:30.902728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.823 [2024-07-14 05:48:30.902742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.823 [2024-07-14 05:48:30.902770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.823 qpair failed and we were unable to recover it. 00:34:23.823 [2024-07-14 05:48:30.912579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.823 [2024-07-14 05:48:30.912737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.823 [2024-07-14 05:48:30.912764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.823 [2024-07-14 05:48:30.912779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.823 [2024-07-14 05:48:30.912793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.823 [2024-07-14 05:48:30.912836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.823 qpair failed and we were unable to recover it. 
00:34:23.823 [2024-07-14 05:48:30.922593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.823 [2024-07-14 05:48:30.922828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.823 [2024-07-14 05:48:30.922859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.823 [2024-07-14 05:48:30.922900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.823 [2024-07-14 05:48:30.922916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:23.823 [2024-07-14 05:48:30.922947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.823 qpair failed and we were unable to recover it. 00:34:24.080 [2024-07-14 05:48:30.932618] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.080 [2024-07-14 05:48:30.932795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.080 [2024-07-14 05:48:30.932824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.080 [2024-07-14 05:48:30.932841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.080 [2024-07-14 05:48:30.932855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.080 [2024-07-14 05:48:30.932893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.080 qpair failed and we were unable to recover it. 00:34:24.080 [2024-07-14 05:48:30.942700] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.080 [2024-07-14 05:48:30.942880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.080 [2024-07-14 05:48:30.942908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.080 [2024-07-14 05:48:30.942924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.080 [2024-07-14 05:48:30.942938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.080 [2024-07-14 05:48:30.942968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 
00:34:24.081 [2024-07-14 05:48:30.952643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:30.952803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:30.952830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:30.952846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:30.952861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:30.952898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 00:34:24.081 [2024-07-14 05:48:30.962704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:30.962860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:30.962893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:30.962909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:30.962922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:30.962957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 00:34:24.081 [2024-07-14 05:48:30.972715] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:30.972881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:30.972910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:30.972926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:30.972940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:30.972969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 
00:34:24.081 [2024-07-14 05:48:30.982757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:30.982926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:30.982953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:30.982970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:30.982984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:30.983013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 00:34:24.081 [2024-07-14 05:48:30.992751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:30.992910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:30.992937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:30.992953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:30.992967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:30.992996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 00:34:24.081 [2024-07-14 05:48:31.002820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.002988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.003014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.003029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.003042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.003071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 
00:34:24.081 [2024-07-14 05:48:31.012824] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.013037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.013069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.013087] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.013105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.013134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 00:34:24.081 [2024-07-14 05:48:31.022879] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.023032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.023059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.023075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.023089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.023117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 00:34:24.081 [2024-07-14 05:48:31.032886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.033045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.033073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.033088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.033102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.033131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 
00:34:24.081 [2024-07-14 05:48:31.042941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.043137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.043163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.043179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.043193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.043222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 00:34:24.081 [2024-07-14 05:48:31.052975] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.053130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.053157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.053173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.053194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.053223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 00:34:24.081 [2024-07-14 05:48:31.062955] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.063115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.063141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.063156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.063170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.063198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 
00:34:24.081 [2024-07-14 05:48:31.073079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.073247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.073274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.073291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.073305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.073338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 00:34:24.081 [2024-07-14 05:48:31.083036] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.083240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.083282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.083297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.083310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.083353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 00:34:24.081 [2024-07-14 05:48:31.093022] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.093179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.093206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.093221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.093234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.093263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 
00:34:24.081 [2024-07-14 05:48:31.103110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.103277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.103303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.103319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.103332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.103361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 00:34:24.081 [2024-07-14 05:48:31.113092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.113250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.113277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.113292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.113305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.113333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 00:34:24.081 [2024-07-14 05:48:31.123212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.123413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.123454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.123469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.123482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.123510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 
00:34:24.081 [2024-07-14 05:48:31.133165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.133320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.133346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.133361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.133375] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.133403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 00:34:24.081 [2024-07-14 05:48:31.143183] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.143357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.143384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.143400] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.143421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.143451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 00:34:24.081 [2024-07-14 05:48:31.153233] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.153408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.153434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.153449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.153463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.153491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 
00:34:24.081 [2024-07-14 05:48:31.163273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.163448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.163474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.163504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.163518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.163548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 00:34:24.081 [2024-07-14 05:48:31.173259] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.173415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.173441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.173457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.173470] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.173499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 00:34:24.081 [2024-07-14 05:48:31.183341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.081 [2024-07-14 05:48:31.183507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.081 [2024-07-14 05:48:31.183536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.081 [2024-07-14 05:48:31.183552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.081 [2024-07-14 05:48:31.183565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.081 [2024-07-14 05:48:31.183595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.081 qpair failed and we were unable to recover it. 
00:34:24.338 [2024-07-14 05:48:31.193417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.338 [2024-07-14 05:48:31.193610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.338 [2024-07-14 05:48:31.193654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.338 [2024-07-14 05:48:31.193670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.338 [2024-07-14 05:48:31.193692] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.338 [2024-07-14 05:48:31.193736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.338 qpair failed and we were unable to recover it. 00:34:24.338 [2024-07-14 05:48:31.203383] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.338 [2024-07-14 05:48:31.203576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.338 [2024-07-14 05:48:31.203618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.338 [2024-07-14 05:48:31.203634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.338 [2024-07-14 05:48:31.203648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.338 [2024-07-14 05:48:31.203691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.338 qpair failed and we were unable to recover it. 00:34:24.338 [2024-07-14 05:48:31.213385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.338 [2024-07-14 05:48:31.213540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.338 [2024-07-14 05:48:31.213566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.338 [2024-07-14 05:48:31.213581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.338 [2024-07-14 05:48:31.213594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.338 [2024-07-14 05:48:31.213624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.338 qpair failed and we were unable to recover it. 
00:34:24.338 [2024-07-14 05:48:31.223392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.338 [2024-07-14 05:48:31.223555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.338 [2024-07-14 05:48:31.223581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.338 [2024-07-14 05:48:31.223597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.338 [2024-07-14 05:48:31.223610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.338 [2024-07-14 05:48:31.223640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.338 qpair failed and we were unable to recover it. 00:34:24.338 [2024-07-14 05:48:31.233461] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.338 [2024-07-14 05:48:31.233639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.338 [2024-07-14 05:48:31.233664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.338 [2024-07-14 05:48:31.233680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.338 [2024-07-14 05:48:31.233699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.338 [2024-07-14 05:48:31.233729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.338 qpair failed and we were unable to recover it. 00:34:24.338 [2024-07-14 05:48:31.243445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.338 [2024-07-14 05:48:31.243606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.338 [2024-07-14 05:48:31.243632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.338 [2024-07-14 05:48:31.243647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.338 [2024-07-14 05:48:31.243662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.338 [2024-07-14 05:48:31.243691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.338 qpair failed and we were unable to recover it. 
00:34:24.338 [2024-07-14 05:48:31.253507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.338 [2024-07-14 05:48:31.253669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.338 [2024-07-14 05:48:31.253695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.338 [2024-07-14 05:48:31.253725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.338 [2024-07-14 05:48:31.253739] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.339 [2024-07-14 05:48:31.253767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.339 qpair failed and we were unable to recover it. 00:34:24.339 [2024-07-14 05:48:31.263510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.339 [2024-07-14 05:48:31.263686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.339 [2024-07-14 05:48:31.263713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.339 [2024-07-14 05:48:31.263728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.339 [2024-07-14 05:48:31.263742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.339 [2024-07-14 05:48:31.263771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.339 qpair failed and we were unable to recover it. 00:34:24.339 [2024-07-14 05:48:31.273525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.339 [2024-07-14 05:48:31.273678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.339 [2024-07-14 05:48:31.273704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.339 [2024-07-14 05:48:31.273719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.339 [2024-07-14 05:48:31.273732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.339 [2024-07-14 05:48:31.273763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.339 qpair failed and we were unable to recover it. 
00:34:24.339 [2024-07-14 05:48:31.283599] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.339 [2024-07-14 05:48:31.283765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.339 [2024-07-14 05:48:31.283799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.339 [2024-07-14 05:48:31.283816] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.339 [2024-07-14 05:48:31.283832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5d00000b90 00:34:24.339 [2024-07-14 05:48:31.283873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.339 qpair failed and we were unable to recover it. 00:34:24.339 [2024-07-14 05:48:31.293619] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.339 [2024-07-14 05:48:31.293795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.339 [2024-07-14 05:48:31.293824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.339 [2024-07-14 05:48:31.293840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.339 [2024-07-14 05:48:31.293855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5d00000b90 00:34:24.339 [2024-07-14 05:48:31.293892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.339 qpair failed and we were unable to recover it. 00:34:24.339 [2024-07-14 05:48:31.303624] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.339 [2024-07-14 05:48:31.303780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.339 [2024-07-14 05:48:31.303814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.339 [2024-07-14 05:48:31.303831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.339 [2024-07-14 05:48:31.303846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5d08000b90 00:34:24.339 [2024-07-14 05:48:31.303885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:24.339 qpair failed and we were unable to recover it. 
00:34:24.339 [2024-07-14 05:48:31.313700] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.339 [2024-07-14 05:48:31.313886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.339 [2024-07-14 05:48:31.313915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.339 [2024-07-14 05:48:31.313931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.339 [2024-07-14 05:48:31.313946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5d08000b90 00:34:24.339 [2024-07-14 05:48:31.313978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:24.339 qpair failed and we were unable to recover it. 00:34:24.339 [2024-07-14 05:48:31.323683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.339 [2024-07-14 05:48:31.323844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.339 [2024-07-14 05:48:31.323888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.339 [2024-07-14 05:48:31.323912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.339 [2024-07-14 05:48:31.323927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5cf8000b90 00:34:24.339 [2024-07-14 05:48:31.323961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.339 qpair failed and we were unable to recover it. 00:34:24.339 [2024-07-14 05:48:31.333732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.339 [2024-07-14 05:48:31.333902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.339 [2024-07-14 05:48:31.333931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.339 [2024-07-14 05:48:31.333947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.339 [2024-07-14 05:48:31.333962] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5cf8000b90 00:34:24.339 [2024-07-14 05:48:31.333994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.339 qpair failed and we were unable to recover it. 
00:34:24.339 [2024-07-14 05:48:31.343756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.339 [2024-07-14 05:48:31.343923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.339 [2024-07-14 05:48:31.343956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.339 [2024-07-14 05:48:31.343973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.339 [2024-07-14 05:48:31.343988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.339 [2024-07-14 05:48:31.344018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.339 qpair failed and we were unable to recover it. 00:34:24.339 [2024-07-14 05:48:31.353768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.339 [2024-07-14 05:48:31.353932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.339 [2024-07-14 05:48:31.353960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.339 [2024-07-14 05:48:31.353976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.339 [2024-07-14 05:48:31.353989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1405840 00:34:24.339 [2024-07-14 05:48:31.354020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.339 qpair failed and we were unable to recover it. 00:34:24.339 [2024-07-14 05:48:31.354114] nvme_ctrlr.c:4353:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:24.339 A controller has encountered a failure and is being reset. 00:34:24.339 Controller properly reset. 00:34:24.339 Initializing NVMe Controllers 00:34:24.339 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:24.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:24.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:24.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:24.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:24.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:24.339 Initialization complete. Launching workers. 
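Once the keep-alive failure is detected the controller is reset and reattached to the same target (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1) and worker threads are relaunched on cores 0-3. For orientation only, the CONNECT parameters repeated in the error lines above (trtype:TCP, adrfam:IPv4, traddr:10.0.0.2, trsvcid:4420, subnqn:nqn.2016-06.io.spdk:cnode1) correspond to the following kernel-initiator nvme-cli invocation; this is an illustrative equivalent, not a command executed in this run:

  # Hedged host-side sketch (not from this log): the nvme-cli equivalent of the
  # fabric CONNECT attempts above.  -t transport, -a traddr, -s trsvcid, -n subnqn.
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  sudo nvme list-subsys                                 # verify the fabrics controller attached
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # detach again when finished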
00:34:24.339 Starting thread on core 1 00:34:24.339 Starting thread on core 2 00:34:24.339 Starting thread on core 3 00:34:24.339 Starting thread on core 0 00:34:24.339 05:48:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:24.339 00:34:24.339 real 0m10.895s 00:34:24.339 user 0m17.127s 00:34:24.339 sys 0m5.669s 00:34:24.339 05:48:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:24.339 05:48:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:24.339 ************************************ 00:34:24.339 END TEST nvmf_target_disconnect_tc2 00:34:24.339 ************************************ 00:34:24.339 05:48:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:24.339 05:48:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:24.339 05:48:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:24.339 05:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:24.339 05:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:24.339 05:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:24.339 05:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:24.339 05:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:24.339 05:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:24.596 rmmod nvme_tcp 00:34:24.596 rmmod nvme_fabrics 00:34:24.596 rmmod nvme_keyring 00:34:24.596 05:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:24.596 05:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:24.596 05:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:24.596 05:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3393917 ']' 00:34:24.596 05:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3393917 00:34:24.596 05:48:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3393917 ']' 00:34:24.596 05:48:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 3393917 00:34:24.596 05:48:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:34:24.596 05:48:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:24.596 05:48:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3393917 00:34:24.596 05:48:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:34:24.596 05:48:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:34:24.596 05:48:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3393917' 00:34:24.596 killing process with pid 3393917 00:34:24.596 05:48:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 3393917 00:34:24.596 05:48:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 3393917 00:34:24.854 05:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:24.854 
05:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:24.854 05:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:24.854 05:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:24.854 05:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:24.854 05:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.854 05:48:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:24.855 05:48:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.756 05:48:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:26.756 00:34:26.756 real 0m15.646s 00:34:26.756 user 0m43.793s 00:34:26.756 sys 0m7.648s 00:34:26.756 05:48:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:26.756 05:48:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:26.756 ************************************ 00:34:26.756 END TEST nvmf_target_disconnect 00:34:26.756 ************************************ 00:34:26.756 05:48:33 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:26.756 05:48:33 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:26.756 05:48:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:26.756 05:48:33 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:26.756 00:34:26.756 real 27m8.127s 00:34:26.756 user 74m28.342s 00:34:26.756 sys 6m26.014s 00:34:26.756 05:48:33 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:26.756 05:48:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:26.756 ************************************ 00:34:26.756 END TEST nvmf_tcp 00:34:26.756 ************************************ 00:34:26.756 05:48:33 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:26.756 05:48:33 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:26.756 05:48:33 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:26.756 05:48:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:26.756 05:48:33 -- common/autotest_common.sh@10 -- # set +x 00:34:27.014 ************************************ 00:34:27.014 START TEST spdkcli_nvmf_tcp 00:34:27.014 ************************************ 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:27.014 * Looking for test storage... 
00:34:27.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3395126 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3395126 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 3395126 ']' 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:27.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:27.014 05:48:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:27.014 [2024-07-14 05:48:33.991073] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:27.014 [2024-07-14 05:48:33.991173] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3395126 ] 00:34:27.014 EAL: No free 2048 kB hugepages reported on node 1 00:34:27.014 [2024-07-14 05:48:34.048575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:27.272 [2024-07-14 05:48:34.138885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:27.272 [2024-07-14 05:48:34.138895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.272 05:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:27.272 05:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:34:27.272 05:48:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:27.272 05:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:27.272 05:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:27.272 05:48:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:27.272 05:48:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:27.272 05:48:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:27.272 05:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:27.272 05:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:27.272 05:48:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:27.272 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:27.272 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:27.272 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:27.272 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:27.272 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:27.272 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:27.272 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:27.272 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:27.272 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:27.272 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:27.272 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:27.272 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:27.272 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:27.272 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:27.272 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:27.272 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:27.272 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:27.272 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:27.272 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:27.272 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:27.272 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:27.272 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:27.272 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:27.272 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:27.272 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:27.272 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:27.272 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:27.272 ' 00:34:29.800 [2024-07-14 05:48:36.839370] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:31.188 [2024-07-14 05:48:38.079739] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:33.717 [2024-07-14 05:48:40.367009] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:35.653 [2024-07-14 05:48:42.341309] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:37.025 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:37.025 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:37.025 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:37.025 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:37.025 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:37.025 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:37.025 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:37.025 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:37.025 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:37.025 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:37.025 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:37.025 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:37.025 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:37.025 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:37.025 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:37.025 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:37.025 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:37.025 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:37.025 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:37.025 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:37.025 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:37.025 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:37.025 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:37.025 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:37.025 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:37.025 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:37.025 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:37.025 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:37.025 05:48:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:37.025 05:48:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:37.025 05:48:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:37.025 05:48:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:37.025 05:48:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:37.025 05:48:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:37.025 05:48:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:37.025 05:48:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:37.590 05:48:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:37.590 05:48:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:37.590 05:48:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:37.590 05:48:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:37.590 05:48:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:37.590 05:48:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:37.590 05:48:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:37.590 05:48:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:37.590 05:48:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:37.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:37.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:37.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:37.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:37.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:37.590 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:37.590 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:37.590 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:37.590 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:37.590 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:37.590 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:37.590 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:37.590 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:37.590 ' 00:34:42.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:42.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:42.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:42.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:42.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:42.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:42.855 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:42.855 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:42.855 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:42.855 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:42.855 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:34:42.855 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:42.855 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:42.855 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:42.855 05:48:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:42.855 05:48:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:42.855 05:48:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.855 05:48:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3395126 00:34:42.855 05:48:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3395126 ']' 00:34:42.855 05:48:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3395126 00:34:42.855 05:48:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:34:42.855 05:48:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:42.855 05:48:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3395126 00:34:42.855 05:48:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:42.855 05:48:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:42.855 05:48:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3395126' 00:34:42.856 killing process with pid 3395126 00:34:42.856 05:48:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 3395126 00:34:42.856 05:48:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 3395126 00:34:43.114 05:48:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:43.114 05:48:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:43.114 05:48:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3395126 ']' 00:34:43.114 05:48:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3395126 00:34:43.114 05:48:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3395126 ']' 00:34:43.114 05:48:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3395126 00:34:43.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3395126) - No such process 00:34:43.114 05:48:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 3395126 is not found' 00:34:43.114 Process with pid 3395126 is not found 00:34:43.114 05:48:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:43.114 05:48:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:43.114 05:48:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:43.114 00:34:43.114 real 0m16.087s 00:34:43.114 user 0m34.064s 00:34:43.114 sys 0m0.838s 00:34:43.114 05:48:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:43.114 05:48:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:43.114 ************************************ 00:34:43.114 END TEST spdkcli_nvmf_tcp 00:34:43.114 ************************************ 00:34:43.114 05:48:49 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:43.114 05:48:49 -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:43.114 05:48:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:43.114 05:48:49 -- common/autotest_common.sh@10 -- # set +x 00:34:43.114 ************************************ 00:34:43.114 START TEST nvmf_identify_passthru 00:34:43.114 ************************************ 00:34:43.114 05:48:50 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:43.114 * Looking for test storage... 00:34:43.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:43.114 05:48:50 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:43.114 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:43.114 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:43.114 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:43.114 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:43.115 05:48:50 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:43.115 05:48:50 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:43.115 05:48:50 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:43.115 05:48:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.115 05:48:50 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.115 05:48:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.115 05:48:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:43.115 05:48:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:43.115 05:48:50 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:43.115 05:48:50 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:43.115 05:48:50 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:43.115 05:48:50 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:43.115 05:48:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.115 05:48:50 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.115 05:48:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.115 05:48:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:43.115 05:48:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.115 05:48:50 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.115 05:48:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:43.115 05:48:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:43.115 05:48:50 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:43.115 05:48:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:45.017 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:45.017 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:45.017 05:48:52 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:45.017 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:45.017 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:45.018 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:45.018 05:48:52 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:45.018 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:45.277 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:45.277 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:45.277 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:45.277 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:45.277 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:45.277 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:45.277 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:45.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:45.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:34:45.277 00:34:45.277 --- 10.0.0.2 ping statistics --- 00:34:45.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:45.277 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:34:45.277 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:45.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:45.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:34:45.277 00:34:45.277 --- 10.0.0.1 ping statistics --- 00:34:45.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:45.277 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:34:45.277 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:45.277 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:45.277 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:45.277 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:45.277 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:45.277 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:45.277 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:45.277 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:45.277 05:48:52 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:45.277 05:48:52 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:45.277 05:48:52 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:45.277 05:48:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:45.277 05:48:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:45.277 05:48:52 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:34:45.277 05:48:52 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:34:45.277 05:48:52 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:34:45.277 05:48:52 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:34:45.277 05:48:52 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:45.277 05:48:52 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:45.277 05:48:52 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:45.277 05:48:52 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:45.277 05:48:52 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:34:45.277 05:48:52 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:34:45.277 05:48:52 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:34:45.277 05:48:52 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0 00:34:45.277 05:48:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:34:45.277 05:48:52 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:34:45.277 05:48:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:45.277 05:48:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:45.277 05:48:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:45.277 EAL: No free 2048 kB hugepages reported on node 1 00:34:49.460 
05:48:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:34:49.460 05:48:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:49.460 05:48:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:49.460 05:48:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:49.460 EAL: No free 2048 kB hugepages reported on node 1 00:34:53.644 05:49:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:53.644 05:49:00 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:53.644 05:49:00 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:53.644 05:49:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.644 05:49:00 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:53.644 05:49:00 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:53.644 05:49:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.644 05:49:00 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3399739 00:34:53.644 05:49:00 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:53.644 05:49:00 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:53.644 05:49:00 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3399739 00:34:53.644 05:49:00 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 3399739 ']' 00:34:53.644 05:49:00 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.644 05:49:00 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:53.644 05:49:00 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.644 05:49:00 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:53.644 05:49:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.903 [2024-07-14 05:49:00.777438] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:53.903 [2024-07-14 05:49:00.777510] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:53.903 EAL: No free 2048 kB hugepages reported on node 1 00:34:53.903 [2024-07-14 05:49:00.843452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:53.903 [2024-07-14 05:49:00.928321] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:53.903 [2024-07-14 05:49:00.928371] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:53.903 [2024-07-14 05:49:00.928385] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:53.903 [2024-07-14 05:49:00.928396] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:53.903 [2024-07-14 05:49:00.928406] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:53.903 [2024-07-14 05:49:00.928486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.903 [2024-07-14 05:49:00.928510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:53.903 [2024-07-14 05:49:00.928569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:53.903 [2024-07-14 05:49:00.928571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:54.161 05:49:01 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:54.161 05:49:01 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:34:54.161 05:49:01 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:54.161 05:49:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.161 05:49:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:54.161 INFO: Log level set to 20 00:34:54.161 INFO: Requests: 00:34:54.161 { 00:34:54.161 "jsonrpc": "2.0", 00:34:54.161 "method": "nvmf_set_config", 00:34:54.161 "id": 1, 00:34:54.161 "params": { 00:34:54.161 "admin_cmd_passthru": { 00:34:54.161 "identify_ctrlr": true 00:34:54.161 } 00:34:54.161 } 00:34:54.161 } 00:34:54.161 00:34:54.161 INFO: response: 00:34:54.161 { 00:34:54.161 "jsonrpc": "2.0", 00:34:54.161 "id": 1, 00:34:54.161 "result": true 00:34:54.161 } 00:34:54.161 00:34:54.161 05:49:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.161 05:49:01 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:54.161 05:49:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.161 05:49:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:54.161 INFO: Setting log level to 20 00:34:54.161 INFO: Setting log level to 20 00:34:54.161 INFO: Log level set to 20 00:34:54.161 INFO: Log level set to 20 00:34:54.161 INFO: Requests: 00:34:54.161 { 00:34:54.161 "jsonrpc": "2.0", 00:34:54.161 "method": "framework_start_init", 00:34:54.161 "id": 1 00:34:54.161 } 00:34:54.161 00:34:54.161 INFO: Requests: 00:34:54.161 { 00:34:54.161 "jsonrpc": "2.0", 00:34:54.161 "method": "framework_start_init", 00:34:54.161 "id": 1 00:34:54.161 } 00:34:54.161 00:34:54.161 [2024-07-14 05:49:01.119079] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:54.161 INFO: response: 00:34:54.161 { 00:34:54.161 "jsonrpc": "2.0", 00:34:54.161 "id": 1, 00:34:54.161 "result": true 00:34:54.161 } 00:34:54.161 00:34:54.161 INFO: response: 00:34:54.161 { 00:34:54.161 "jsonrpc": "2.0", 00:34:54.161 "id": 1, 00:34:54.161 "result": true 00:34:54.161 } 00:34:54.161 00:34:54.161 05:49:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.161 05:49:01 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:54.162 05:49:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.162 05:49:01 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:54.162 INFO: Setting log level to 40 00:34:54.162 INFO: Setting log level to 40 00:34:54.162 INFO: Setting log level to 40 00:34:54.162 [2024-07-14 05:49:01.128892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:54.162 05:49:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.162 05:49:01 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:54.162 05:49:01 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:54.162 05:49:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:54.162 05:49:01 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:34:54.162 05:49:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.162 05:49:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.446 Nvme0n1 00:34:57.446 05:49:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.446 05:49:03 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:57.446 05:49:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.446 05:49:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.446 05:49:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.446 05:49:03 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:57.446 05:49:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.446 05:49:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.446 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.446 05:49:04 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:57.446 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.446 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.446 [2024-07-14 05:49:04.015375] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:57.446 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.446 05:49:04 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:57.446 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.446 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.446 [ 00:34:57.446 { 00:34:57.446 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:57.446 "subtype": "Discovery", 00:34:57.446 "listen_addresses": [], 00:34:57.446 "allow_any_host": true, 00:34:57.446 "hosts": [] 00:34:57.446 }, 00:34:57.446 { 00:34:57.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:57.446 "subtype": "NVMe", 00:34:57.446 "listen_addresses": [ 00:34:57.446 { 00:34:57.446 "trtype": "TCP", 00:34:57.446 "adrfam": "IPv4", 00:34:57.446 "traddr": "10.0.0.2", 00:34:57.446 "trsvcid": "4420" 00:34:57.446 } 00:34:57.446 ], 00:34:57.446 "allow_any_host": true, 00:34:57.446 "hosts": [], 00:34:57.446 "serial_number": 
"SPDK00000000000001", 00:34:57.446 "model_number": "SPDK bdev Controller", 00:34:57.446 "max_namespaces": 1, 00:34:57.446 "min_cntlid": 1, 00:34:57.446 "max_cntlid": 65519, 00:34:57.446 "namespaces": [ 00:34:57.446 { 00:34:57.446 "nsid": 1, 00:34:57.446 "bdev_name": "Nvme0n1", 00:34:57.446 "name": "Nvme0n1", 00:34:57.446 "nguid": "2B3E2844104648B18D8B29855806A017", 00:34:57.446 "uuid": "2b3e2844-1046-48b1-8d8b-29855806a017" 00:34:57.446 } 00:34:57.446 ] 00:34:57.446 } 00:34:57.446 ] 00:34:57.446 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.446 05:49:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:57.446 05:49:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:57.446 05:49:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:57.446 EAL: No free 2048 kB hugepages reported on node 1 00:34:57.446 05:49:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:34:57.447 05:49:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:57.447 05:49:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:57.447 05:49:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:57.447 EAL: No free 2048 kB hugepages reported on node 1 00:34:57.447 05:49:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:57.447 05:49:04 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:34:57.447 05:49:04 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:57.447 05:49:04 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:57.447 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.447 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.447 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.447 05:49:04 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:57.447 05:49:04 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:57.447 05:49:04 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:57.447 05:49:04 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:57.447 05:49:04 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:57.447 05:49:04 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:57.447 05:49:04 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:57.447 05:49:04 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:57.447 rmmod nvme_tcp 00:34:57.447 rmmod nvme_fabrics 00:34:57.447 rmmod nvme_keyring 00:34:57.447 05:49:04 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:57.447 05:49:04 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:57.447 05:49:04 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:57.447 05:49:04 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3399739 ']' 00:34:57.447 05:49:04 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3399739 00:34:57.447 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 3399739 ']' 00:34:57.447 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 3399739 00:34:57.447 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:34:57.447 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:57.447 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3399739 00:34:57.447 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:57.447 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:57.447 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3399739' 00:34:57.447 killing process with pid 3399739 00:34:57.447 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 3399739 00:34:57.447 05:49:04 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 3399739 00:34:59.347 05:49:06 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:59.347 05:49:06 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:59.347 05:49:06 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:59.347 05:49:06 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:59.347 05:49:06 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:59.347 05:49:06 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.347 05:49:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:59.347 05:49:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:01.248 05:49:08 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:01.248 00:35:01.248 real 0m18.101s 00:35:01.248 user 0m27.010s 00:35:01.248 sys 0m2.346s 00:35:01.248 05:49:08 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:01.248 05:49:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:01.248 ************************************ 00:35:01.248 END TEST nvmf_identify_passthru 00:35:01.248 ************************************ 00:35:01.248 05:49:08 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:01.248 05:49:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:01.248 05:49:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:01.248 05:49:08 -- common/autotest_common.sh@10 -- # set +x 00:35:01.248 ************************************ 00:35:01.248 START TEST nvmf_dif 00:35:01.248 ************************************ 00:35:01.248 05:49:08 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:01.248 * Looking for test storage... 
00:35:01.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:01.248 05:49:08 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:01.248 05:49:08 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:01.248 05:49:08 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:01.248 05:49:08 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:01.248 05:49:08 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:01.248 05:49:08 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:01.248 05:49:08 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:01.248 05:49:08 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:01.248 05:49:08 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:01.248 05:49:08 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:01.248 05:49:08 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:01.248 05:49:08 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:01.248 05:49:08 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:01.248 05:49:08 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:01.248 05:49:08 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:01.248 05:49:08 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:01.249 05:49:08 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:01.249 05:49:08 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:01.249 05:49:08 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:01.249 05:49:08 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.249 05:49:08 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.249 05:49:08 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.249 05:49:08 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:35:01.249 05:49:08 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:01.249 05:49:08 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:01.249 05:49:08 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:01.249 05:49:08 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:01.249 05:49:08 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:01.249 05:49:08 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.249 05:49:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:01.249 05:49:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:01.249 05:49:08 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:01.249 05:49:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:03.153 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:03.153 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:03.153 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:03.153 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:03.153 05:49:10 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:03.154 05:49:10 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:03.154 05:49:10 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:03.154 05:49:10 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:03.154 05:49:10 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:03.154 05:49:10 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:03.154 05:49:10 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:03.154 05:49:10 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:03.154 05:49:10 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:03.154 05:49:10 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:03.154 05:49:10 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:03.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:03.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:35:03.154 00:35:03.154 --- 10.0.0.2 ping statistics --- 00:35:03.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.154 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:35:03.154 05:49:10 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:03.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:03.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:35:03.154 00:35:03.154 --- 10.0.0.1 ping statistics --- 00:35:03.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.154 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:35:03.411 05:49:10 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:03.411 05:49:10 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:03.411 05:49:10 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:03.411 05:49:10 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:04.345 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:04.345 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:04.345 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:04.345 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:04.345 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:04.345 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:04.345 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:04.345 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:04.345 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:04.345 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:04.345 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:04.345 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:04.345 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:04.345 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:04.345 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:04.345 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:04.345 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:04.604 05:49:11 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:04.604 05:49:11 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:04.604 05:49:11 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:04.604 05:49:11 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:04.604 05:49:11 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:04.604 05:49:11 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:04.604 05:49:11 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:04.604 05:49:11 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:04.604 05:49:11 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:04.604 05:49:11 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:04.604 05:49:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:04.604 05:49:11 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3402880 00:35:04.604 05:49:11 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:04.604 05:49:11 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3402880 00:35:04.604 05:49:11 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 3402880 ']' 00:35:04.604 05:49:11 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:04.604 05:49:11 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:04.604 05:49:11 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:04.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:04.604 05:49:11 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:04.604 05:49:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:04.604 [2024-07-14 05:49:11.650709] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:04.604 [2024-07-14 05:49:11.650787] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:04.604 EAL: No free 2048 kB hugepages reported on node 1 00:35:04.862 [2024-07-14 05:49:11.715262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.863 [2024-07-14 05:49:11.799616] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:04.863 [2024-07-14 05:49:11.799668] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:04.863 [2024-07-14 05:49:11.799681] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:04.863 [2024-07-14 05:49:11.799691] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:04.863 [2024-07-14 05:49:11.799701] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
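The trace above moves the target-side port (cvl_0_0) into its own network namespace, leaves the initiator-side port (cvl_0_1) in the default namespace, opens TCP port 4420, verifies connectivity in both directions, and then launches nvmf_tgt inside that namespace. A minimal standalone sketch of the same setup, assuming the same cvl_0_0/cvl_0_1 device names and a built SPDK tree (paths abbreviated, not the harness's exact wrapper functions):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                         # default ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target ns -> default ns
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &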
00:35:04.863 [2024-07-14 05:49:11.799731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.863 05:49:11 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:04.863 05:49:11 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:35:04.863 05:49:11 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:04.863 05:49:11 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:04.863 05:49:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:04.863 05:49:11 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:04.863 05:49:11 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:04.863 05:49:11 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:04.863 05:49:11 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.863 05:49:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:04.863 [2024-07-14 05:49:11.936096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:04.863 05:49:11 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.863 05:49:11 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:04.863 05:49:11 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:04.863 05:49:11 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:04.863 05:49:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:04.863 ************************************ 00:35:04.863 START TEST fio_dif_1_default 00:35:04.863 ************************************ 00:35:04.863 05:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:35:04.863 05:49:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:04.863 05:49:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:04.863 05:49:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:04.863 05:49:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:04.863 05:49:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:04.863 05:49:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:04.863 05:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.122 05:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:05.122 bdev_null0 00:35:05.122 05:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.122 05:49:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:05.122 05:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.122 05:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:05.122 05:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.122 05:49:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:05.122 05:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.122 05:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:05.122 05:49:11 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.122 05:49:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:05.122 05:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.122 05:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:05.122 [2024-07-14 05:49:11.996414] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.122 05:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:05.122 { 00:35:05.122 "params": { 00:35:05.122 "name": "Nvme$subsystem", 00:35:05.122 "trtype": "$TEST_TRANSPORT", 00:35:05.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:05.122 "adrfam": "ipv4", 00:35:05.122 "trsvcid": "$NVMF_PORT", 00:35:05.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:05.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:05.122 "hdgst": ${hdgst:-false}, 00:35:05.122 "ddgst": ${ddgst:-false} 00:35:05.122 }, 00:35:05.122 "method": "bdev_nvme_attach_controller" 00:35:05.122 } 00:35:05.122 EOF 00:35:05.122 )") 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:05.122 "params": { 00:35:05.122 "name": "Nvme0", 00:35:05.122 "trtype": "tcp", 00:35:05.122 "traddr": "10.0.0.2", 00:35:05.122 "adrfam": "ipv4", 00:35:05.122 "trsvcid": "4420", 00:35:05.122 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:05.122 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:05.122 "hdgst": false, 00:35:05.122 "ddgst": false 00:35:05.122 }, 00:35:05.122 "method": "bdev_nvme_attach_controller" 00:35:05.122 }' 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:05.122 05:49:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:05.407 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:05.407 fio-3.35 00:35:05.407 Starting 1 thread 00:35:05.407 EAL: No free 2048 kB hugepages reported on node 1 00:35:17.600 00:35:17.600 filename0: (groupid=0, jobs=1): err= 0: pid=3403105: Sun Jul 14 05:49:22 2024 00:35:17.600 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10028msec) 00:35:17.600 slat (nsec): min=4809, max=73681, avg=10402.38, stdev=5313.48 00:35:17.600 clat (usec): min=40890, max=44168, avg=41747.04, stdev=451.90 00:35:17.600 lat (usec): min=40897, max=44184, avg=41757.44, stdev=451.94 00:35:17.600 clat percentiles (usec): 00:35:17.600 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:17.600 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:17.600 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:17.600 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:35:17.600 | 99.99th=[44303] 00:35:17.600 bw ( KiB/s): min= 352, max= 384, per=99.76%, avg=382.40, stdev= 7.16, samples=20 00:35:17.600 iops : min= 88, max= 96, 
avg=95.60, stdev= 1.79, samples=20 00:35:17.600 lat (msec) : 50=100.00% 00:35:17.600 cpu : usr=89.82%, sys=9.91%, ctx=9, majf=0, minf=202 00:35:17.600 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:17.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.600 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.600 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:17.600 00:35:17.600 Run status group 0 (all jobs): 00:35:17.600 READ: bw=383KiB/s (392kB/s), 383KiB/s-383KiB/s (392kB/s-392kB/s), io=3840KiB (3932kB), run=10028-10028msec 00:35:17.600 05:49:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:17.600 05:49:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:17.600 05:49:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:17.600 05:49:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:17.600 05:49:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:17.600 05:49:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:17.600 05:49:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.600 05:49:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:17.600 05:49:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.600 05:49:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:17.600 05:49:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.600 05:49:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:17.600 05:49:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.600 00:35:17.600 real 0m11.190s 00:35:17.600 user 0m10.175s 00:35:17.600 sys 0m1.262s 00:35:17.600 05:49:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:17.601 ************************************ 00:35:17.601 END TEST fio_dif_1_default 00:35:17.601 ************************************ 00:35:17.601 05:49:23 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:17.601 05:49:23 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:17.601 05:49:23 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:17.601 05:49:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:17.601 ************************************ 00:35:17.601 START TEST fio_dif_1_multi_subsystems 00:35:17.601 ************************************ 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:17.601 bdev_null0 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:17.601 [2024-07-14 05:49:23.228276] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:17.601 bdev_null1 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.601 05:49:23 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:17.601 { 00:35:17.601 "params": { 00:35:17.601 "name": "Nvme$subsystem", 00:35:17.601 "trtype": "$TEST_TRANSPORT", 00:35:17.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:17.601 "adrfam": "ipv4", 00:35:17.601 "trsvcid": "$NVMF_PORT", 00:35:17.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:17.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:17.601 "hdgst": ${hdgst:-false}, 00:35:17.601 "ddgst": ${ddgst:-false} 00:35:17.601 }, 00:35:17.601 "method": "bdev_nvme_attach_controller" 00:35:17.601 } 00:35:17.601 EOF 00:35:17.601 )") 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1337 -- # shift 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:17.601 { 00:35:17.601 "params": { 00:35:17.601 "name": "Nvme$subsystem", 00:35:17.601 "trtype": "$TEST_TRANSPORT", 00:35:17.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:17.601 "adrfam": "ipv4", 00:35:17.601 "trsvcid": "$NVMF_PORT", 00:35:17.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:17.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:17.601 "hdgst": ${hdgst:-false}, 00:35:17.601 "ddgst": ${ddgst:-false} 00:35:17.601 }, 00:35:17.601 "method": "bdev_nvme_attach_controller" 00:35:17.601 } 00:35:17.601 EOF 00:35:17.601 )") 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
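The create_transport/create_subsystems steps traced above build the DIF-enabled target: a TCP transport created with --dif-insert-or-strip, a null bdev with 512-byte blocks plus 16 bytes of per-block metadata and a DIF type, and one subsystem per bdev listening on 10.0.0.2:4420. rpc_cmd in the trace is the harness's wrapper around SPDK's RPC client; with a stock tree the same sequence would look roughly like this sketch (socket path defaulted, repetition for extra subsystems implied):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # fio_dif_1_multi_subsystems repeats the last four calls for bdev_null1 / cnode1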
00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:17.601 "params": { 00:35:17.601 "name": "Nvme0", 00:35:17.601 "trtype": "tcp", 00:35:17.601 "traddr": "10.0.0.2", 00:35:17.601 "adrfam": "ipv4", 00:35:17.601 "trsvcid": "4420", 00:35:17.601 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:17.601 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:17.601 "hdgst": false, 00:35:17.601 "ddgst": false 00:35:17.601 }, 00:35:17.601 "method": "bdev_nvme_attach_controller" 00:35:17.601 },{ 00:35:17.601 "params": { 00:35:17.601 "name": "Nvme1", 00:35:17.601 "trtype": "tcp", 00:35:17.601 "traddr": "10.0.0.2", 00:35:17.601 "adrfam": "ipv4", 00:35:17.601 "trsvcid": "4420", 00:35:17.601 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:17.601 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:17.601 "hdgst": false, 00:35:17.601 "ddgst": false 00:35:17.601 }, 00:35:17.601 "method": "bdev_nvme_attach_controller" 00:35:17.601 }' 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:17.601 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:17.602 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:17.602 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:17.602 05:49:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:17.602 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:17.602 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:17.602 fio-3.35 00:35:17.602 Starting 2 threads 00:35:17.602 EAL: No free 2048 kB hugepages reported on node 1 00:35:27.576 00:35:27.576 filename0: (groupid=0, jobs=1): err= 0: pid=3404503: Sun Jul 14 05:49:34 2024 00:35:27.576 read: IOPS=96, BW=386KiB/s (396kB/s)(3872KiB/10021msec) 00:35:27.576 slat (nsec): min=7938, max=31129, avg=9781.31, stdev=2756.34 00:35:27.576 clat (usec): min=40887, max=43372, avg=41376.02, stdev=502.29 00:35:27.576 lat (usec): min=40895, max=43403, avg=41385.80, stdev=502.51 00:35:27.576 clat percentiles (usec): 00:35:27.576 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:27.576 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:35:27.576 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:27.576 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:35:27.576 | 99.99th=[43254] 
00:35:27.576 bw ( KiB/s): min= 384, max= 416, per=34.08%, avg=385.60, stdev= 7.16, samples=20 00:35:27.576 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:35:27.576 lat (msec) : 50=100.00% 00:35:27.576 cpu : usr=94.61%, sys=5.13%, ctx=13, majf=0, minf=101 00:35:27.576 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:27.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.576 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.577 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:27.577 filename1: (groupid=0, jobs=1): err= 0: pid=3404504: Sun Jul 14 05:49:34 2024 00:35:27.577 read: IOPS=185, BW=744KiB/s (761kB/s)(7456KiB/10028msec) 00:35:27.577 slat (nsec): min=7989, max=36128, avg=9640.03, stdev=2558.47 00:35:27.577 clat (usec): min=858, max=43418, avg=21488.19, stdev=20441.72 00:35:27.577 lat (usec): min=867, max=43449, avg=21497.83, stdev=20441.61 00:35:27.577 clat percentiles (usec): 00:35:27.577 | 1.00th=[ 889], 5.00th=[ 914], 10.00th=[ 922], 20.00th=[ 938], 00:35:27.577 | 30.00th=[ 955], 40.00th=[ 1004], 50.00th=[41157], 60.00th=[41681], 00:35:27.577 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:27.577 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:35:27.577 | 99.99th=[43254] 00:35:27.577 bw ( KiB/s): min= 672, max= 768, per=65.86%, avg=744.00, stdev=34.24, samples=20 00:35:27.577 iops : min= 168, max= 192, avg=186.00, stdev= 8.56, samples=20 00:35:27.577 lat (usec) : 1000=40.02% 00:35:27.577 lat (msec) : 2=9.76%, 50=50.21% 00:35:27.577 cpu : usr=94.76%, sys=4.97%, ctx=13, majf=0, minf=155 00:35:27.577 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:27.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.577 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.577 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:27.577 00:35:27.577 Run status group 0 (all jobs): 00:35:27.577 READ: bw=1130KiB/s (1157kB/s), 386KiB/s-744KiB/s (396kB/s-761kB/s), io=11.1MiB (11.6MB), run=10021-10028msec 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
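The fio workloads above run against those subsystems through SPDK's fio bdev plugin: the harness preloads build/fio/spdk_bdev and feeds the bdev_nvme_attach_controller JSON shown earlier plus a generated job file over /dev/fd/62 and /dev/fd/61. A hedged standalone equivalent, assuming the attach-controller entries printed above are wrapped into a complete bdev-subsystem JSON config saved as nvme_attach.json and the job file as dif.job (both file names are illustrative, not the harness's):

  LD_PRELOAD=./build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf nvme_attach.json dif.job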
00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.577 00:35:27.577 real 0m11.195s 00:35:27.577 user 0m20.116s 00:35:27.577 sys 0m1.273s 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:27.577 05:49:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:27.577 ************************************ 00:35:27.577 END TEST fio_dif_1_multi_subsystems 00:35:27.577 ************************************ 00:35:27.577 05:49:34 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:27.577 05:49:34 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:27.577 05:49:34 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:27.577 05:49:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:27.577 ************************************ 00:35:27.577 START TEST fio_dif_rand_params 00:35:27.577 ************************************ 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=0 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.577 bdev_null0 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.577 [2024-07-14 05:49:34.466845] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:27.577 { 00:35:27.577 "params": { 00:35:27.577 "name": "Nvme$subsystem", 00:35:27.577 "trtype": "$TEST_TRANSPORT", 00:35:27.577 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:27.577 "adrfam": "ipv4", 00:35:27.577 "trsvcid": "$NVMF_PORT", 00:35:27.577 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:27.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:27.577 "hdgst": ${hdgst:-false}, 00:35:27.577 "ddgst": ${ddgst:-false} 00:35:27.577 }, 00:35:27.577 "method": "bdev_nvme_attach_controller" 00:35:27.577 } 00:35:27.577 EOF 00:35:27.577 )") 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:27.577 05:49:34 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
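For this fio_dif_rand_params case the parameters set at dif.sh@103 above are bs=128k, numjobs=3, iodepth=3 and runtime=5 against a DIF type 3 null bdev, and the run below matches them (randread, 128KiB blocks, 3 threads, roughly 5 seconds). gen_fio_conf's job file is not echoed in the trace, so the following is only a hypothetical approximation of it, assuming the attached controller exposes its namespace as bdev Nvme0n1:

  cat > dif.job <<'EOF'
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5
  [filename0]
  filename=Nvme0n1
  EOF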
00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:27.577 05:49:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:27.577 "params": { 00:35:27.577 "name": "Nvme0", 00:35:27.577 "trtype": "tcp", 00:35:27.577 "traddr": "10.0.0.2", 00:35:27.578 "adrfam": "ipv4", 00:35:27.578 "trsvcid": "4420", 00:35:27.578 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:27.578 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:27.578 "hdgst": false, 00:35:27.578 "ddgst": false 00:35:27.578 }, 00:35:27.578 "method": "bdev_nvme_attach_controller" 00:35:27.578 }' 00:35:27.578 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:27.578 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:27.578 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:27.578 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:27.578 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:27.578 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:27.578 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:27.578 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:27.578 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:27.578 05:49:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:27.866 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:27.867 ... 
00:35:27.867 fio-3.35 00:35:27.867 Starting 3 threads 00:35:27.867 EAL: No free 2048 kB hugepages reported on node 1 00:35:34.421 00:35:34.421 filename0: (groupid=0, jobs=1): err= 0: pid=3405896: Sun Jul 14 05:49:40 2024 00:35:34.421 read: IOPS=174, BW=21.8MiB/s (22.9MB/s)(109MiB/5007msec) 00:35:34.421 slat (nsec): min=5225, max=32435, avg=14212.76, stdev=2984.67 00:35:34.421 clat (usec): min=4559, max=97757, avg=17163.69, stdev=17013.80 00:35:34.421 lat (usec): min=4572, max=97767, avg=17177.90, stdev=17014.14 00:35:34.421 clat percentiles (usec): 00:35:34.421 | 1.00th=[ 5669], 5.00th=[ 6063], 10.00th=[ 6652], 20.00th=[ 7963], 00:35:34.421 | 30.00th=[ 8848], 40.00th=[10028], 50.00th=[11731], 60.00th=[12649], 00:35:34.421 | 70.00th=[13698], 80.00th=[14746], 90.00th=[51643], 95.00th=[53740], 00:35:34.421 | 99.00th=[93848], 99.50th=[93848], 99.90th=[98042], 99.95th=[98042], 00:35:34.421 | 99.99th=[98042] 00:35:34.421 bw ( KiB/s): min=13824, max=36864, per=30.69%, avg=22301.40, stdev=7153.57, samples=10 00:35:34.421 iops : min= 108, max= 288, avg=174.20, stdev=55.90, samples=10 00:35:34.421 lat (msec) : 10=39.47%, 20=46.11%, 50=1.37%, 100=13.04% 00:35:34.421 cpu : usr=90.51%, sys=7.53%, ctx=279, majf=0, minf=74 00:35:34.421 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.421 issued rwts: total=874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.421 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:34.421 filename0: (groupid=0, jobs=1): err= 0: pid=3405897: Sun Jul 14 05:49:40 2024 00:35:34.421 read: IOPS=157, BW=19.7MiB/s (20.6MB/s)(99.0MiB/5035msec) 00:35:34.421 slat (nsec): min=4588, max=29809, avg=13364.94, stdev=1997.47 00:35:34.421 clat (usec): min=7295, max=94896, avg=19050.99, stdev=15999.15 00:35:34.421 lat (usec): min=7308, max=94910, avg=19064.36, stdev=15999.04 00:35:34.421 clat percentiles (usec): 00:35:34.421 | 1.00th=[ 7963], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10290], 00:35:34.421 | 30.00th=[10814], 40.00th=[11338], 50.00th=[12387], 60.00th=[13173], 00:35:34.421 | 70.00th=[14091], 80.00th=[15533], 90.00th=[51119], 95.00th=[52691], 00:35:34.421 | 99.00th=[55313], 99.50th=[56361], 99.90th=[94897], 99.95th=[94897], 00:35:34.421 | 99.99th=[94897] 00:35:34.421 bw ( KiB/s): min=10752, max=25344, per=27.80%, avg=20202.60, stdev=4225.46, samples=10 00:35:34.421 iops : min= 84, max= 198, avg=157.80, stdev=33.00, samples=10 00:35:34.421 lat (msec) : 10=14.39%, 20=67.80%, 50=2.27%, 100=15.53% 00:35:34.421 cpu : usr=93.09%, sys=6.52%, ctx=5, majf=0, minf=79 00:35:34.421 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.421 issued rwts: total=792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.421 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:34.421 filename0: (groupid=0, jobs=1): err= 0: pid=3405898: Sun Jul 14 05:49:40 2024 00:35:34.421 read: IOPS=237, BW=29.7MiB/s (31.2MB/s)(150MiB/5048msec) 00:35:34.421 slat (nsec): min=5042, max=31372, avg=12551.17, stdev=2161.49 00:35:34.421 clat (usec): min=5113, max=92326, avg=12566.05, stdev=12250.10 00:35:34.421 lat (usec): min=5125, max=92340, avg=12578.60, stdev=12250.08 00:35:34.421 clat percentiles (usec): 
00:35:34.421 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 7046], 00:35:34.421 | 30.00th=[ 7570], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9503], 00:35:34.421 | 70.00th=[10552], 80.00th=[11338], 90.00th=[12911], 95.00th=[50594], 00:35:34.421 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54264], 99.95th=[92799], 00:35:34.421 | 99.99th=[92799] 00:35:34.421 bw ( KiB/s): min=17664, max=38656, per=42.17%, avg=30643.20, stdev=6476.40, samples=10 00:35:34.421 iops : min= 138, max= 302, avg=239.40, stdev=50.60, samples=10 00:35:34.421 lat (msec) : 10=64.42%, 20=26.75%, 50=3.00%, 100=5.83% 00:35:34.421 cpu : usr=92.31%, sys=7.05%, ctx=8, majf=0, minf=108 00:35:34.421 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.421 issued rwts: total=1200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.421 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:34.421 00:35:34.421 Run status group 0 (all jobs): 00:35:34.421 READ: bw=71.0MiB/s (74.4MB/s), 19.7MiB/s-29.7MiB/s (20.6MB/s-31.2MB/s), io=358MiB (376MB), run=5007-5048msec 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
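Between cases the harness tears the target back down (nvmf_delete_subsystem, bdev_null_delete) and rebuilds it with a different DIF type: type 1 and type 3 were exercised above, and the trace below switches to type 2 with three subsystems and an 8-job, 4k, queue-depth-16 fio workload. On the bdev side the only change is the --dif-type argument, which selects the T10 protection-information type carried in the 16-byte metadata. A sketch of the three variants used across these tests (created one at a time, as the tests do, each torn down before the next):

  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # type 1: reference tag tied to the LBA
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2   # type 2: expected reference tag supplied by the command
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3   # type 3: reference tag not checked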
00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.421 bdev_null0 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.421 [2024-07-14 05:49:40.662726] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.421 bdev_null1 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.421 bdev_null2 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:35:34.421 { 00:35:34.421 "params": { 00:35:34.421 "name": "Nvme$subsystem", 00:35:34.421 "trtype": "$TEST_TRANSPORT", 00:35:34.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.421 "adrfam": "ipv4", 00:35:34.421 "trsvcid": "$NVMF_PORT", 00:35:34.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.421 "hdgst": ${hdgst:-false}, 00:35:34.421 "ddgst": ${ddgst:-false} 00:35:34.421 }, 00:35:34.421 "method": "bdev_nvme_attach_controller" 00:35:34.421 } 00:35:34.421 EOF 00:35:34.421 )") 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:34.421 { 00:35:34.421 "params": { 00:35:34.421 "name": "Nvme$subsystem", 00:35:34.421 "trtype": "$TEST_TRANSPORT", 00:35:34.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.421 "adrfam": "ipv4", 00:35:34.421 "trsvcid": "$NVMF_PORT", 00:35:34.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.421 "hdgst": ${hdgst:-false}, 00:35:34.421 "ddgst": ${ddgst:-false} 00:35:34.421 }, 00:35:34.421 "method": "bdev_nvme_attach_controller" 00:35:34.421 } 00:35:34.421 EOF 00:35:34.421 )") 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:34.421 05:49:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:34.421 { 00:35:34.421 "params": { 00:35:34.421 "name": "Nvme$subsystem", 00:35:34.421 "trtype": "$TEST_TRANSPORT", 00:35:34.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.421 "adrfam": "ipv4", 00:35:34.422 "trsvcid": "$NVMF_PORT", 00:35:34.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.422 "hdgst": ${hdgst:-false}, 00:35:34.422 "ddgst": ${ddgst:-false} 00:35:34.422 }, 00:35:34.422 "method": "bdev_nvme_attach_controller" 00:35:34.422 } 00:35:34.422 EOF 00:35:34.422 )") 00:35:34.422 05:49:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:34.422 05:49:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:34.422 05:49:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:34.422 05:49:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:34.422 05:49:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:34.422 "params": { 00:35:34.422 "name": "Nvme0", 00:35:34.422 "trtype": "tcp", 00:35:34.422 "traddr": "10.0.0.2", 00:35:34.422 "adrfam": "ipv4", 00:35:34.422 "trsvcid": "4420", 00:35:34.422 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:34.422 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:34.422 "hdgst": false, 00:35:34.422 "ddgst": false 00:35:34.422 }, 00:35:34.422 "method": "bdev_nvme_attach_controller" 00:35:34.422 },{ 00:35:34.422 "params": { 00:35:34.422 "name": "Nvme1", 00:35:34.422 "trtype": "tcp", 00:35:34.422 "traddr": "10.0.0.2", 00:35:34.422 "adrfam": "ipv4", 00:35:34.422 "trsvcid": "4420", 00:35:34.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:34.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:34.422 "hdgst": false, 00:35:34.422 "ddgst": false 00:35:34.422 }, 00:35:34.422 "method": "bdev_nvme_attach_controller" 00:35:34.422 },{ 00:35:34.422 "params": { 00:35:34.422 "name": "Nvme2", 00:35:34.422 "trtype": "tcp", 00:35:34.422 "traddr": "10.0.0.2", 00:35:34.422 "adrfam": "ipv4", 00:35:34.422 "trsvcid": "4420", 00:35:34.422 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:34.422 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:34.422 "hdgst": false, 00:35:34.422 "ddgst": false 00:35:34.422 }, 00:35:34.422 "method": "bdev_nvme_attach_controller" 00:35:34.422 }' 00:35:34.422 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:34.422 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:34.422 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:34.422 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:34.422 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:34.422 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:34.422 05:49:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:35:34.422 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:34.422 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:34.422 05:49:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.422 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:34.422 ... 00:35:34.422 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:34.422 ... 00:35:34.422 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:34.422 ... 00:35:34.422 fio-3.35 00:35:34.422 Starting 24 threads 00:35:34.422 EAL: No free 2048 kB hugepages reported on node 1 00:35:46.633 00:35:46.633 filename0: (groupid=0, jobs=1): err= 0: pid=3406761: Sun Jul 14 05:49:52 2024 00:35:46.633 read: IOPS=119, BW=477KiB/s (488kB/s)(4800KiB/10064msec) 00:35:46.633 slat (nsec): min=4436, max=95939, avg=33058.71, stdev=12508.47 00:35:46.633 clat (msec): min=32, max=518, avg=133.89, stdev=174.54 00:35:46.633 lat (msec): min=32, max=518, avg=133.92, stdev=174.53 00:35:46.633 clat percentiles (msec): 00:35:46.633 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:46.633 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 37], 00:35:46.633 | 70.00th=[ 37], 80.00th=[ 347], 90.00th=[ 493], 95.00th=[ 506], 00:35:46.633 | 99.00th=[ 518], 99.50th=[ 518], 99.90th=[ 518], 99.95th=[ 518], 00:35:46.633 | 99.99th=[ 518] 00:35:46.633 bw ( KiB/s): min= 128, max= 1920, per=3.60%, avg=473.55, stdev=680.91, samples=20 00:35:46.633 iops : min= 32, max= 480, avg=118.35, stdev=170.17, samples=20 00:35:46.633 lat (msec) : 50=72.00%, 100=2.67%, 250=0.33%, 500=18.33%, 750=6.67% 00:35:46.633 cpu : usr=96.01%, sys=2.47%, ctx=123, majf=0, minf=43 00:35:46.633 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:46.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.633 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.633 issued rwts: total=1200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.633 filename0: (groupid=0, jobs=1): err= 0: pid=3406762: Sun Jul 14 05:49:52 2024 00:35:46.633 read: IOPS=135, BW=541KiB/s (554kB/s)(5424KiB/10028msec) 00:35:46.633 slat (nsec): min=6562, max=99586, avg=37927.60, stdev=23588.19 00:35:46.633 clat (msec): min=15, max=484, avg=118.14, stdev=125.35 00:35:46.633 lat (msec): min=15, max=484, avg=118.18, stdev=125.34 00:35:46.633 clat percentiles (msec): 00:35:46.633 | 1.00th=[ 20], 5.00th=[ 29], 10.00th=[ 34], 20.00th=[ 34], 00:35:46.633 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.633 | 70.00th=[ 232], 80.00th=[ 255], 90.00th=[ 309], 95.00th=[ 359], 00:35:46.633 | 99.00th=[ 443], 99.50th=[ 443], 99.90th=[ 485], 99.95th=[ 485], 00:35:46.633 | 99.99th=[ 485] 00:35:46.633 bw ( KiB/s): min= 128, max= 1888, per=4.08%, avg=535.80, stdev=660.63, samples=20 00:35:46.633 iops : min= 32, max= 472, avg=133.95, stdev=165.16, samples=20 00:35:46.633 lat (msec) : 20=1.92%, 50=63.05%, 100=2.43%, 250=9.29%, 500=23.30% 00:35:46.633 cpu : usr=98.10%, 
sys=1.41%, ctx=40, majf=0, minf=54 00:35:46.633 IO depths : 1=1.7%, 2=4.0%, 4=9.8%, 8=70.4%, 16=14.1%, 32=0.0%, >=64=0.0% 00:35:46.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.633 complete : 0=0.0%, 4=91.0%, 8=6.4%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.633 issued rwts: total=1356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.633 filename0: (groupid=0, jobs=1): err= 0: pid=3406763: Sun Jul 14 05:49:52 2024 00:35:46.633 read: IOPS=138, BW=553KiB/s (567kB/s)(5568KiB/10061msec) 00:35:46.633 slat (usec): min=8, max=116, avg=50.66, stdev=27.41 00:35:46.633 clat (msec): min=31, max=434, avg=114.84, stdev=107.64 00:35:46.633 lat (msec): min=31, max=434, avg=114.89, stdev=107.62 00:35:46.633 clat percentiles (msec): 00:35:46.633 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:46.633 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.633 | 70.00th=[ 234], 80.00th=[ 245], 90.00th=[ 257], 95.00th=[ 275], 00:35:46.633 | 99.00th=[ 347], 99.50th=[ 347], 99.90th=[ 435], 99.95th=[ 435], 00:35:46.633 | 99.99th=[ 435] 00:35:46.633 bw ( KiB/s): min= 144, max= 1920, per=4.19%, avg=550.40, stdev=626.41, samples=20 00:35:46.633 iops : min= 36, max= 480, avg=137.60, stdev=156.60, samples=20 00:35:46.633 lat (msec) : 50=61.93%, 100=1.29%, 250=19.54%, 500=17.24% 00:35:46.633 cpu : usr=97.78%, sys=1.41%, ctx=58, majf=0, minf=41 00:35:46.633 IO depths : 1=2.8%, 2=9.1%, 4=25.0%, 8=53.4%, 16=9.7%, 32=0.0%, >=64=0.0% 00:35:46.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.633 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.633 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.633 filename0: (groupid=0, jobs=1): err= 0: pid=3406764: Sun Jul 14 05:49:52 2024 00:35:46.633 read: IOPS=150, BW=601KiB/s (615kB/s)(6080KiB/10118msec) 00:35:46.633 slat (nsec): min=5720, max=96340, avg=18789.16, stdev=18257.70 00:35:46.633 clat (msec): min=2, max=335, avg=105.97, stdev=101.17 00:35:46.633 lat (msec): min=2, max=335, avg=105.99, stdev=101.17 00:35:46.633 clat percentiles (msec): 00:35:46.633 | 1.00th=[ 3], 5.00th=[ 24], 10.00th=[ 34], 20.00th=[ 34], 00:35:46.633 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.633 | 70.00th=[ 220], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 259], 00:35:46.633 | 99.00th=[ 292], 99.50th=[ 292], 99.90th=[ 334], 99.95th=[ 334], 00:35:46.633 | 99.99th=[ 334] 00:35:46.633 bw ( KiB/s): min= 256, max= 2176, per=4.58%, avg=601.60, stdev=696.14, samples=20 00:35:46.633 iops : min= 64, max= 544, avg=150.40, stdev=174.03, samples=20 00:35:46.633 lat (msec) : 4=1.05%, 10=1.97%, 20=1.18%, 50=60.00%, 100=1.05% 00:35:46.633 lat (msec) : 250=21.05%, 500=13.68% 00:35:46.633 cpu : usr=98.12%, sys=1.47%, ctx=14, majf=0, minf=41 00:35:46.633 IO depths : 1=3.5%, 2=9.7%, 4=24.5%, 8=53.4%, 16=9.0%, 32=0.0%, >=64=0.0% 00:35:46.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.633 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.633 issued rwts: total=1520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.633 filename0: (groupid=0, jobs=1): err= 0: pid=3406765: Sun Jul 14 05:49:52 2024 00:35:46.633 read: IOPS=140, BW=562KiB/s (576kB/s)(5672KiB/10087msec) 
00:35:46.633 slat (usec): min=8, max=101, avg=43.88, stdev=27.81 00:35:46.633 clat (msec): min=21, max=399, avg=113.21, stdev=105.46 00:35:46.633 lat (msec): min=21, max=399, avg=113.25, stdev=105.43 00:35:46.633 clat percentiles (msec): 00:35:46.633 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:46.634 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 37], 60.00th=[ 44], 00:35:46.634 | 70.00th=[ 224], 80.00th=[ 247], 90.00th=[ 257], 95.00th=[ 262], 00:35:46.634 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 401], 99.95th=[ 401], 00:35:46.634 | 99.99th=[ 401] 00:35:46.634 bw ( KiB/s): min= 128, max= 1920, per=4.27%, avg=560.60, stdev=637.00, samples=20 00:35:46.634 iops : min= 32, max= 480, avg=140.15, stdev=159.25, samples=20 00:35:46.634 lat (msec) : 50=60.93%, 100=2.26%, 250=22.43%, 500=14.39% 00:35:46.634 cpu : usr=97.53%, sys=1.65%, ctx=53, majf=0, minf=33 00:35:46.634 IO depths : 1=3.2%, 2=7.4%, 4=18.2%, 8=61.8%, 16=9.3%, 32=0.0%, >=64=0.0% 00:35:46.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.634 complete : 0=0.0%, 4=92.3%, 8=2.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.634 issued rwts: total=1418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.634 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.634 filename0: (groupid=0, jobs=1): err= 0: pid=3406766: Sun Jul 14 05:49:52 2024 00:35:46.634 read: IOPS=145, BW=582KiB/s (596kB/s)(5888KiB/10110msec) 00:35:46.634 slat (usec): min=8, max=411, avg=30.55, stdev=21.31 00:35:46.634 clat (msec): min=8, max=359, avg=109.25, stdev=105.27 00:35:46.634 lat (msec): min=8, max=359, avg=109.28, stdev=105.27 00:35:46.634 clat percentiles (msec): 00:35:46.634 | 1.00th=[ 11], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:35:46.634 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.634 | 70.00th=[ 232], 80.00th=[ 243], 90.00th=[ 257], 95.00th=[ 266], 00:35:46.634 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:35:46.634 | 99.99th=[ 359] 00:35:46.634 bw ( KiB/s): min= 144, max= 1923, per=4.43%, avg=582.55, stdev=671.56, samples=20 00:35:46.634 iops : min= 36, max= 480, avg=145.60, stdev=167.81, samples=20 00:35:46.634 lat (msec) : 10=0.27%, 20=2.85%, 50=61.01%, 100=1.09%, 250=18.34% 00:35:46.634 lat (msec) : 500=16.44% 00:35:46.634 cpu : usr=95.01%, sys=2.51%, ctx=93, majf=0, minf=50 00:35:46.634 IO depths : 1=4.0%, 2=10.2%, 4=24.9%, 8=52.4%, 16=8.5%, 32=0.0%, >=64=0.0% 00:35:46.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.634 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.634 issued rwts: total=1472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.634 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.634 filename0: (groupid=0, jobs=1): err= 0: pid=3406767: Sun Jul 14 05:49:52 2024 00:35:46.634 read: IOPS=136, BW=546KiB/s (559kB/s)(5504KiB/10086msec) 00:35:46.634 slat (usec): min=8, max=114, avg=50.35, stdev=28.94 00:35:46.634 clat (msec): min=32, max=473, avg=116.45, stdev=117.95 00:35:46.634 lat (msec): min=32, max=473, avg=116.50, stdev=117.95 00:35:46.634 clat percentiles (msec): 00:35:46.634 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:35:46.634 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.634 | 70.00th=[ 228], 80.00th=[ 249], 90.00th=[ 275], 95.00th=[ 309], 00:35:46.634 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 472], 99.95th=[ 472], 00:35:46.634 | 99.99th=[ 472] 00:35:46.634 bw ( KiB/s): min= 128, max= 
1916, per=4.14%, avg=543.80, stdev=643.77, samples=20 00:35:46.634 iops : min= 32, max= 479, avg=135.95, stdev=160.94, samples=20 00:35:46.634 lat (msec) : 50=63.95%, 100=1.16%, 250=15.92%, 500=18.97% 00:35:46.634 cpu : usr=98.19%, sys=1.39%, ctx=18, majf=0, minf=45 00:35:46.634 IO depths : 1=4.7%, 2=10.9%, 4=24.8%, 8=51.8%, 16=7.8%, 32=0.0%, >=64=0.0% 00:35:46.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.634 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.634 issued rwts: total=1376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.634 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.634 filename0: (groupid=0, jobs=1): err= 0: pid=3406768: Sun Jul 14 05:49:52 2024 00:35:46.634 read: IOPS=141, BW=565KiB/s (578kB/s)(5696KiB/10086msec) 00:35:46.634 slat (usec): min=8, max=112, avg=47.36, stdev=27.97 00:35:46.634 clat (msec): min=27, max=408, avg=112.55, stdev=105.62 00:35:46.634 lat (msec): min=27, max=408, avg=112.59, stdev=105.60 00:35:46.634 clat percentiles (msec): 00:35:46.634 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:46.634 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.634 | 70.00th=[ 232], 80.00th=[ 243], 90.00th=[ 257], 95.00th=[ 266], 00:35:46.634 | 99.00th=[ 347], 99.50th=[ 347], 99.90th=[ 409], 99.95th=[ 409], 00:35:46.634 | 99.99th=[ 409] 00:35:46.634 bw ( KiB/s): min= 128, max= 1916, per=4.29%, avg=563.00, stdev=633.53, samples=20 00:35:46.634 iops : min= 32, max= 479, avg=140.75, stdev=158.38, samples=20 00:35:46.634 lat (msec) : 50=62.92%, 100=1.12%, 250=20.08%, 500=15.87% 00:35:46.634 cpu : usr=98.29%, sys=1.29%, ctx=13, majf=0, minf=40 00:35:46.634 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:35:46.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.634 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.634 issued rwts: total=1424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.634 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.634 filename1: (groupid=0, jobs=1): err= 0: pid=3406769: Sun Jul 14 05:49:52 2024 00:35:46.634 read: IOPS=127, BW=509KiB/s (521kB/s)(5120KiB/10063msec) 00:35:46.634 slat (nsec): min=8211, max=99624, avg=32984.51, stdev=16446.27 00:35:46.634 clat (msec): min=32, max=513, avg=125.47, stdev=145.01 00:35:46.634 lat (msec): min=32, max=513, avg=125.51, stdev=145.00 00:35:46.634 clat percentiles (msec): 00:35:46.634 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:46.634 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.634 | 70.00th=[ 61], 80.00th=[ 305], 90.00th=[ 347], 95.00th=[ 426], 00:35:46.634 | 99.00th=[ 451], 99.50th=[ 451], 99.90th=[ 514], 99.95th=[ 514], 00:35:46.634 | 99.99th=[ 514] 00:35:46.634 bw ( KiB/s): min= 128, max= 1920, per=3.85%, avg=505.75, stdev=664.05, samples=20 00:35:46.634 iops : min= 32, max= 480, avg=126.40, stdev=165.94, samples=20 00:35:46.634 lat (msec) : 50=67.50%, 100=3.75%, 250=1.41%, 500=27.03%, 750=0.31% 00:35:46.634 cpu : usr=98.19%, sys=1.40%, ctx=18, majf=0, minf=44 00:35:46.634 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:46.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.634 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.634 issued rwts: total=1280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.634 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:35:46.634 filename1: (groupid=0, jobs=1): err= 0: pid=3406770: Sun Jul 14 05:49:52 2024 00:35:46.634 read: IOPS=146, BW=588KiB/s (602kB/s)(5940KiB/10105msec) 00:35:46.634 slat (nsec): min=5912, max=89903, avg=29992.05, stdev=21556.22 00:35:46.634 clat (msec): min=13, max=307, avg=108.27, stdev=99.67 00:35:46.634 lat (msec): min=13, max=307, avg=108.30, stdev=99.65 00:35:46.634 clat percentiles (msec): 00:35:46.634 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 34], 00:35:46.634 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.634 | 70.00th=[ 226], 80.00th=[ 239], 90.00th=[ 253], 95.00th=[ 257], 00:35:46.634 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 309], 99.95th=[ 309], 00:35:46.634 | 99.99th=[ 309] 00:35:46.634 bw ( KiB/s): min= 256, max= 1920, per=4.47%, avg=587.40, stdev=664.50, samples=20 00:35:46.634 iops : min= 64, max= 480, avg=146.85, stdev=166.13, samples=20 00:35:46.634 lat (msec) : 20=2.69%, 50=60.67%, 250=24.65%, 500=11.99% 00:35:46.634 cpu : usr=98.27%, sys=1.21%, ctx=70, majf=0, minf=56 00:35:46.634 IO depths : 1=3.5%, 2=9.2%, 4=22.8%, 8=55.3%, 16=9.2%, 32=0.0%, >=64=0.0% 00:35:46.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.634 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.634 issued rwts: total=1485,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.634 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.634 filename1: (groupid=0, jobs=1): err= 0: pid=3406771: Sun Jul 14 05:49:52 2024 00:35:46.634 read: IOPS=139, BW=558KiB/s (572kB/s)(5632KiB/10086msec) 00:35:46.634 slat (usec): min=8, max=116, avg=50.54, stdev=28.46 00:35:46.634 clat (msec): min=19, max=390, avg=113.79, stdev=107.14 00:35:46.634 lat (msec): min=19, max=390, avg=113.84, stdev=107.12 00:35:46.634 clat percentiles (msec): 00:35:46.634 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:46.634 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.634 | 70.00th=[ 234], 80.00th=[ 247], 90.00th=[ 259], 95.00th=[ 275], 00:35:46.634 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 393], 99.95th=[ 393], 00:35:46.634 | 99.99th=[ 393] 00:35:46.634 bw ( KiB/s): min= 128, max= 1916, per=4.24%, avg=556.60, stdev=637.26, samples=20 00:35:46.634 iops : min= 32, max= 479, avg=139.15, stdev=159.32, samples=20 00:35:46.634 lat (msec) : 20=0.28%, 50=62.22%, 100=1.14%, 250=18.04%, 500=18.32% 00:35:46.634 cpu : usr=98.09%, sys=1.40%, ctx=99, majf=0, minf=34 00:35:46.634 IO depths : 1=4.4%, 2=10.6%, 4=24.9%, 8=52.0%, 16=8.1%, 32=0.0%, >=64=0.0% 00:35:46.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.634 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.634 issued rwts: total=1408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.634 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.634 filename1: (groupid=0, jobs=1): err= 0: pid=3406772: Sun Jul 14 05:49:52 2024 00:35:46.634 read: IOPS=138, BW=553KiB/s (566kB/s)(5568KiB/10072msec) 00:35:46.634 slat (nsec): min=8186, max=88588, avg=21210.33, stdev=19186.11 00:35:46.634 clat (msec): min=26, max=371, avg=115.21, stdev=110.12 00:35:46.634 lat (msec): min=26, max=371, avg=115.23, stdev=110.11 00:35:46.634 clat percentiles (msec): 00:35:46.634 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:46.634 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.634 | 70.00th=[ 234], 80.00th=[ 249], 90.00th=[ 
262], 95.00th=[ 321], 00:35:46.634 | 99.00th=[ 359], 99.50th=[ 368], 99.90th=[ 372], 99.95th=[ 372], 00:35:46.634 | 99.99th=[ 372] 00:35:46.634 bw ( KiB/s): min= 128, max= 1900, per=4.18%, avg=550.00, stdev=637.48, samples=20 00:35:46.634 iops : min= 32, max= 475, avg=137.50, stdev=159.37, samples=20 00:35:46.634 lat (msec) : 50=63.07%, 100=1.15%, 250=16.09%, 500=19.68% 00:35:46.634 cpu : usr=98.39%, sys=1.21%, ctx=18, majf=0, minf=43 00:35:46.634 IO depths : 1=1.1%, 2=7.2%, 4=24.6%, 8=55.7%, 16=11.4%, 32=0.0%, >=64=0.0% 00:35:46.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.634 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.634 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.634 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.634 filename1: (groupid=0, jobs=1): err= 0: pid=3406773: Sun Jul 14 05:49:52 2024 00:35:46.634 read: IOPS=128, BW=516KiB/s (528kB/s)(5192KiB/10063msec) 00:35:46.634 slat (nsec): min=8217, max=94939, avg=43085.02, stdev=24384.69 00:35:46.634 clat (msec): min=19, max=458, avg=123.68, stdev=135.30 00:35:46.634 lat (msec): min=19, max=458, avg=123.72, stdev=135.28 00:35:46.635 clat percentiles (msec): 00:35:46.635 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:35:46.635 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.635 | 70.00th=[ 197], 80.00th=[ 296], 90.00th=[ 355], 95.00th=[ 380], 00:35:46.635 | 99.00th=[ 435], 99.50th=[ 447], 99.90th=[ 460], 99.95th=[ 460], 00:35:46.635 | 99.99th=[ 460] 00:35:46.635 bw ( KiB/s): min= 176, max= 1904, per=3.90%, avg=512.95, stdev=639.90, samples=20 00:35:46.635 iops : min= 44, max= 476, avg=128.20, stdev=159.91, samples=20 00:35:46.635 lat (msec) : 20=0.31%, 50=64.25%, 100=3.70%, 250=6.32%, 500=25.42% 00:35:46.635 cpu : usr=98.23%, sys=1.34%, ctx=24, majf=0, minf=49 00:35:46.635 IO depths : 1=0.2%, 2=2.7%, 4=11.1%, 8=71.0%, 16=15.1%, 32=0.0%, >=64=0.0% 00:35:46.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.635 complete : 0=0.0%, 4=91.2%, 8=5.8%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.635 issued rwts: total=1298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.635 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.635 filename1: (groupid=0, jobs=1): err= 0: pid=3406774: Sun Jul 14 05:49:52 2024 00:35:46.635 read: IOPS=147, BW=589KiB/s (603kB/s)(5952KiB/10110msec) 00:35:46.635 slat (usec): min=7, max=107, avg=17.33, stdev=16.40 00:35:46.635 clat (msec): min=10, max=333, avg=108.17, stdev=101.84 00:35:46.635 lat (msec): min=10, max=333, avg=108.19, stdev=101.84 00:35:46.635 clat percentiles (msec): 00:35:46.635 | 1.00th=[ 11], 5.00th=[ 27], 10.00th=[ 33], 20.00th=[ 34], 00:35:46.635 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.635 | 70.00th=[ 232], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 259], 00:35:46.635 | 99.00th=[ 313], 99.50th=[ 330], 99.90th=[ 334], 99.95th=[ 334], 00:35:46.635 | 99.99th=[ 334] 00:35:46.635 bw ( KiB/s): min= 208, max= 2036, per=4.49%, avg=589.00, stdev=669.21, samples=20 00:35:46.635 iops : min= 52, max= 509, avg=147.25, stdev=167.30, samples=20 00:35:46.635 lat (msec) : 20=3.23%, 50=61.16%, 250=22.72%, 500=12.90% 00:35:46.635 cpu : usr=96.04%, sys=2.36%, ctx=121, majf=0, minf=102 00:35:46.635 IO depths : 1=3.3%, 2=9.4%, 4=24.6%, 8=53.5%, 16=9.2%, 32=0.0%, >=64=0.0% 00:35:46.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.635 complete : 
0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.635 issued rwts: total=1488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.635 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.635 filename1: (groupid=0, jobs=1): err= 0: pid=3406775: Sun Jul 14 05:49:52 2024 00:35:46.635 read: IOPS=144, BW=576KiB/s (590kB/s)(5800KiB/10067msec) 00:35:46.635 slat (usec): min=7, max=113, avg=47.52, stdev=29.10 00:35:46.635 clat (msec): min=18, max=419, avg=110.46, stdev=107.51 00:35:46.635 lat (msec): min=18, max=419, avg=110.51, stdev=107.48 00:35:46.635 clat percentiles (msec): 00:35:46.635 | 1.00th=[ 20], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:46.635 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.635 | 70.00th=[ 222], 80.00th=[ 245], 90.00th=[ 262], 95.00th=[ 271], 00:35:46.635 | 99.00th=[ 355], 99.50th=[ 384], 99.90th=[ 418], 99.95th=[ 418], 00:35:46.635 | 99.99th=[ 418] 00:35:46.635 bw ( KiB/s): min= 176, max= 1920, per=4.36%, avg=573.40, stdev=658.60, samples=20 00:35:46.635 iops : min= 44, max= 480, avg=143.35, stdev=164.65, samples=20 00:35:46.635 lat (msec) : 20=1.45%, 50=62.55%, 100=1.10%, 250=17.24%, 500=17.66% 00:35:46.635 cpu : usr=97.28%, sys=1.64%, ctx=22, majf=0, minf=67 00:35:46.635 IO depths : 1=3.7%, 2=8.0%, 4=18.8%, 8=60.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:35:46.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.635 complete : 0=0.0%, 4=92.3%, 8=2.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.635 issued rwts: total=1450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.635 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.635 filename1: (groupid=0, jobs=1): err= 0: pid=3406776: Sun Jul 14 05:49:52 2024 00:35:46.635 read: IOPS=124, BW=496KiB/s (508kB/s)(4992KiB/10061msec) 00:35:46.635 slat (usec): min=8, max=151, avg=30.79, stdev=15.43 00:35:46.635 clat (msec): min=20, max=510, avg=128.30, stdev=149.54 00:35:46.635 lat (msec): min=20, max=510, avg=128.33, stdev=149.53 00:35:46.635 clat percentiles (msec): 00:35:46.635 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:46.635 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.635 | 70.00th=[ 59], 80.00th=[ 317], 90.00th=[ 359], 95.00th=[ 439], 00:35:46.635 | 99.00th=[ 460], 99.50th=[ 498], 99.90th=[ 510], 99.95th=[ 510], 00:35:46.635 | 99.99th=[ 510] 00:35:46.635 bw ( KiB/s): min= 128, max= 1904, per=3.75%, avg=492.80, stdev=653.90, samples=20 00:35:46.635 iops : min= 32, max= 476, avg=123.20, stdev=163.48, samples=20 00:35:46.635 lat (msec) : 50=69.23%, 100=1.28%, 250=1.76%, 500=27.40%, 750=0.32% 00:35:46.635 cpu : usr=97.08%, sys=1.95%, ctx=93, majf=0, minf=44 00:35:46.635 IO depths : 1=4.0%, 2=9.8%, 4=23.3%, 8=54.1%, 16=8.8%, 32=0.0%, >=64=0.0% 00:35:46.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.635 complete : 0=0.0%, 4=93.9%, 8=0.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.635 issued rwts: total=1248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.635 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.635 filename2: (groupid=0, jobs=1): err= 0: pid=3406777: Sun Jul 14 05:49:52 2024 00:35:46.635 read: IOPS=141, BW=566KiB/s (580kB/s)(5712KiB/10089msec) 00:35:46.635 slat (nsec): min=8213, max=93924, avg=37833.00, stdev=24294.41 00:35:46.635 clat (msec): min=31, max=400, avg=112.42, stdev=106.43 00:35:46.635 lat (msec): min=31, max=400, avg=112.46, stdev=106.41 00:35:46.635 clat percentiles (msec): 00:35:46.635 | 1.00th=[ 
33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:35:46.635 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.635 | 70.00th=[ 215], 80.00th=[ 247], 90.00th=[ 259], 95.00th=[ 271], 00:35:46.635 | 99.00th=[ 363], 99.50th=[ 388], 99.90th=[ 401], 99.95th=[ 401], 00:35:46.635 | 99.99th=[ 401] 00:35:46.635 bw ( KiB/s): min= 176, max= 1916, per=4.30%, avg=564.60, stdev=631.76, samples=20 00:35:46.635 iops : min= 44, max= 479, avg=141.15, stdev=157.94, samples=20 00:35:46.635 lat (msec) : 50=62.75%, 100=1.54%, 250=19.61%, 500=16.11% 00:35:46.635 cpu : usr=97.42%, sys=1.76%, ctx=60, majf=0, minf=64 00:35:46.635 IO depths : 1=4.2%, 2=8.6%, 4=19.1%, 8=59.6%, 16=8.5%, 32=0.0%, >=64=0.0% 00:35:46.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.635 complete : 0=0.0%, 4=92.4%, 8=2.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.635 issued rwts: total=1428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.635 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.635 filename2: (groupid=0, jobs=1): err= 0: pid=3406778: Sun Jul 14 05:49:52 2024 00:35:46.635 read: IOPS=136, BW=545KiB/s (558kB/s)(5488KiB/10075msec) 00:35:46.635 slat (usec): min=8, max=100, avg=49.33, stdev=25.07 00:35:46.635 clat (msec): min=21, max=424, avg=116.90, stdev=116.76 00:35:46.635 lat (msec): min=22, max=424, avg=116.95, stdev=116.74 00:35:46.635 clat percentiles (msec): 00:35:46.635 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:46.635 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 38], 00:35:46.635 | 70.00th=[ 234], 80.00th=[ 253], 90.00th=[ 271], 95.00th=[ 330], 00:35:46.635 | 99.00th=[ 418], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 426], 00:35:46.635 | 99.99th=[ 426] 00:35:46.635 bw ( KiB/s): min= 128, max= 1904, per=4.13%, avg=542.40, stdev=635.28, samples=20 00:35:46.635 iops : min= 32, max= 476, avg=135.60, stdev=158.82, samples=20 00:35:46.635 lat (msec) : 50=63.34%, 100=1.97%, 250=13.05%, 500=21.65% 00:35:46.635 cpu : usr=97.91%, sys=1.42%, ctx=26, majf=0, minf=52 00:35:46.635 IO depths : 1=1.7%, 2=6.8%, 4=21.3%, 8=59.3%, 16=10.9%, 32=0.0%, >=64=0.0% 00:35:46.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.635 complete : 0=0.0%, 4=93.3%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.635 issued rwts: total=1372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.635 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.635 filename2: (groupid=0, jobs=1): err= 0: pid=3406779: Sun Jul 14 05:49:52 2024 00:35:46.635 read: IOPS=139, BW=557KiB/s (570kB/s)(5616KiB/10086msec) 00:35:46.635 slat (usec): min=8, max=106, avg=46.25, stdev=29.26 00:35:46.635 clat (msec): min=18, max=403, avg=114.16, stdev=108.39 00:35:46.635 lat (msec): min=18, max=403, avg=114.21, stdev=108.37 00:35:46.635 clat percentiles (msec): 00:35:46.635 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:46.635 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.635 | 70.00th=[ 234], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 275], 00:35:46.635 | 99.00th=[ 338], 99.50th=[ 380], 99.90th=[ 405], 99.95th=[ 405], 00:35:46.635 | 99.99th=[ 405] 00:35:46.635 bw ( KiB/s): min= 128, max= 1916, per=4.23%, avg=555.00, stdev=638.27, samples=20 00:35:46.635 iops : min= 32, max= 479, avg=138.75, stdev=159.57, samples=20 00:35:46.635 lat (msec) : 20=0.43%, 50=62.11%, 100=1.28%, 250=18.52%, 500=17.66% 00:35:46.635 cpu : usr=98.32%, sys=1.26%, ctx=16, majf=0, minf=44 00:35:46.635 IO depths : 
1=4.0%, 2=9.5%, 4=22.8%, 8=55.1%, 16=8.6%, 32=0.0%, >=64=0.0% 00:35:46.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.635 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.635 issued rwts: total=1404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.635 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.635 filename2: (groupid=0, jobs=1): err= 0: pid=3406780: Sun Jul 14 05:49:52 2024 00:35:46.635 read: IOPS=137, BW=549KiB/s (562kB/s)(5504KiB/10027msec) 00:35:46.635 slat (nsec): min=7682, max=95917, avg=31325.39, stdev=16824.78 00:35:46.635 clat (msec): min=32, max=441, avg=116.29, stdev=113.24 00:35:46.635 lat (msec): min=32, max=441, avg=116.33, stdev=113.23 00:35:46.635 clat percentiles (msec): 00:35:46.635 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:46.635 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.635 | 70.00th=[ 234], 80.00th=[ 253], 90.00th=[ 266], 95.00th=[ 330], 00:35:46.635 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 443], 99.95th=[ 443], 00:35:46.635 | 99.99th=[ 443] 00:35:46.635 bw ( KiB/s): min= 128, max= 1920, per=4.14%, avg=543.95, stdev=647.00, samples=20 00:35:46.635 iops : min= 32, max= 480, avg=135.95, stdev=161.69, samples=20 00:35:46.635 lat (msec) : 50=62.79%, 100=2.33%, 250=14.10%, 500=20.78% 00:35:46.635 cpu : usr=98.20%, sys=1.38%, ctx=21, majf=0, minf=49 00:35:46.635 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:46.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.635 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.635 issued rwts: total=1376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.635 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.635 filename2: (groupid=0, jobs=1): err= 0: pid=3406781: Sun Jul 14 05:49:52 2024 00:35:46.635 read: IOPS=124, BW=497KiB/s (509kB/s)(5000KiB/10063msec) 00:35:46.635 slat (nsec): min=8204, max=89882, avg=26061.46, stdev=15483.48 00:35:46.635 clat (msec): min=18, max=492, avg=128.61, stdev=150.87 00:35:46.636 lat (msec): min=18, max=492, avg=128.63, stdev=150.87 00:35:46.636 clat percentiles (msec): 00:35:46.636 | 1.00th=[ 30], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:46.636 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 37], 00:35:46.636 | 70.00th=[ 61], 80.00th=[ 317], 90.00th=[ 359], 95.00th=[ 443], 00:35:46.636 | 99.00th=[ 493], 99.50th=[ 493], 99.90th=[ 493], 99.95th=[ 493], 00:35:46.636 | 99.99th=[ 493] 00:35:46.636 bw ( KiB/s): min= 128, max= 1872, per=3.76%, avg=493.60, stdev=655.77, samples=20 00:35:46.636 iops : min= 32, max= 468, avg=123.40, stdev=163.94, samples=20 00:35:46.636 lat (msec) : 20=0.32%, 50=68.32%, 100=1.92%, 250=1.60%, 500=27.84% 00:35:46.636 cpu : usr=97.75%, sys=1.60%, ctx=100, majf=0, minf=43 00:35:46.636 IO depths : 1=2.0%, 2=7.4%, 4=21.8%, 8=57.8%, 16=11.0%, 32=0.0%, >=64=0.0% 00:35:46.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.636 complete : 0=0.0%, 4=93.5%, 8=1.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.636 issued rwts: total=1250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.636 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.636 filename2: (groupid=0, jobs=1): err= 0: pid=3406782: Sun Jul 14 05:49:52 2024 00:35:46.636 read: IOPS=125, BW=502KiB/s (514kB/s)(5056KiB/10063msec) 00:35:46.636 slat (usec): min=8, max=111, avg=64.65, stdev=19.94 00:35:46.636 clat (msec): 
min=32, max=510, avg=126.81, stdev=148.92 00:35:46.636 lat (msec): min=32, max=510, avg=126.87, stdev=148.91 00:35:46.636 clat percentiles (msec): 00:35:46.636 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:46.636 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 37], 00:35:46.636 | 70.00th=[ 92], 80.00th=[ 317], 90.00th=[ 359], 95.00th=[ 443], 00:35:46.636 | 99.00th=[ 472], 99.50th=[ 493], 99.90th=[ 510], 99.95th=[ 510], 00:35:46.636 | 99.99th=[ 510] 00:35:46.636 bw ( KiB/s): min= 128, max= 1920, per=3.80%, avg=499.20, stdev=653.29, samples=20 00:35:46.636 iops : min= 32, max= 480, avg=124.80, stdev=163.32, samples=20 00:35:46.636 lat (msec) : 50=68.35%, 100=2.53%, 250=3.32%, 500=25.32%, 750=0.47% 00:35:46.636 cpu : usr=98.21%, sys=1.29%, ctx=21, majf=0, minf=37 00:35:46.636 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:35:46.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.636 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.636 issued rwts: total=1264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.636 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.636 filename2: (groupid=0, jobs=1): err= 0: pid=3406783: Sun Jul 14 05:49:52 2024 00:35:46.636 read: IOPS=144, BW=579KiB/s (592kB/s)(5824KiB/10066msec) 00:35:46.636 slat (usec): min=6, max=108, avg=45.31, stdev=27.74 00:35:46.636 clat (msec): min=19, max=345, avg=110.23, stdev=105.14 00:35:46.636 lat (msec): min=19, max=345, avg=110.27, stdev=105.12 00:35:46.636 clat percentiles (msec): 00:35:46.636 | 1.00th=[ 20], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:46.636 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.636 | 70.00th=[ 232], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 266], 00:35:46.636 | 99.00th=[ 347], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:35:46.636 | 99.99th=[ 347] 00:35:46.636 bw ( KiB/s): min= 256, max= 1920, per=4.38%, avg=575.80, stdev=656.85, samples=20 00:35:46.636 iops : min= 64, max= 480, avg=143.95, stdev=164.21, samples=20 00:35:46.636 lat (msec) : 20=2.20%, 50=61.54%, 100=1.10%, 250=19.78%, 500=15.38% 00:35:46.636 cpu : usr=98.53%, sys=1.07%, ctx=19, majf=0, minf=52 00:35:46.636 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:46.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.636 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.636 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.636 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.636 filename2: (groupid=0, jobs=1): err= 0: pid=3406784: Sun Jul 14 05:49:52 2024 00:35:46.636 read: IOPS=143, BW=574KiB/s (588kB/s)(5808KiB/10111msec) 00:35:46.636 slat (usec): min=8, max=102, avg=47.75, stdev=27.87 00:35:46.636 clat (msec): min=10, max=388, avg=110.62, stdev=110.57 00:35:46.636 lat (msec): min=10, max=388, avg=110.67, stdev=110.55 00:35:46.636 clat percentiles (msec): 00:35:46.636 | 1.00th=[ 12], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:46.636 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:35:46.636 | 70.00th=[ 224], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 300], 00:35:46.636 | 99.00th=[ 376], 99.50th=[ 384], 99.90th=[ 388], 99.95th=[ 388], 00:35:46.636 | 99.99th=[ 388] 00:35:46.636 bw ( KiB/s): min= 128, max= 1920, per=4.37%, avg=574.40, stdev=675.94, samples=20 00:35:46.636 iops : min= 32, max= 480, avg=143.60, 
stdev=168.98, samples=20 00:35:46.636 lat (msec) : 20=3.17%, 50=61.85%, 100=1.10%, 250=15.29%, 500=18.60% 00:35:46.636 cpu : usr=98.29%, sys=1.29%, ctx=44, majf=0, minf=48 00:35:46.636 IO depths : 1=4.4%, 2=9.4%, 4=20.9%, 8=57.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:35:46.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.636 complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.636 issued rwts: total=1452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.636 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:46.636 00:35:46.636 Run status group 0 (all jobs): 00:35:46.636 READ: bw=12.8MiB/s (13.4MB/s), 477KiB/s-601KiB/s (488kB/s-615kB/s), io=130MiB (136MB), run=10027-10118msec 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.636 bdev_null0 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.636 [2024-07-14 05:49:52.433810] 
tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:46.636 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.637 bdev_null1 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:46.637 { 00:35:46.637 "params": { 00:35:46.637 "name": "Nvme$subsystem", 00:35:46.637 "trtype": "$TEST_TRANSPORT", 00:35:46.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.637 "adrfam": "ipv4", 00:35:46.637 "trsvcid": "$NVMF_PORT", 00:35:46.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.637 "hdgst": 
${hdgst:-false}, 00:35:46.637 "ddgst": ${ddgst:-false} 00:35:46.637 }, 00:35:46.637 "method": "bdev_nvme_attach_controller" 00:35:46.637 } 00:35:46.637 EOF 00:35:46.637 )") 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:46.637 { 00:35:46.637 "params": { 00:35:46.637 "name": "Nvme$subsystem", 00:35:46.637 "trtype": "$TEST_TRANSPORT", 00:35:46.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.637 "adrfam": "ipv4", 00:35:46.637 "trsvcid": "$NVMF_PORT", 00:35:46.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.637 "hdgst": ${hdgst:-false}, 00:35:46.637 "ddgst": ${ddgst:-false} 00:35:46.637 }, 00:35:46.637 "method": "bdev_nvme_attach_controller" 00:35:46.637 } 00:35:46.637 EOF 00:35:46.637 )") 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:46.637 "params": { 00:35:46.637 "name": "Nvme0", 00:35:46.637 "trtype": "tcp", 00:35:46.637 "traddr": "10.0.0.2", 00:35:46.637 "adrfam": "ipv4", 00:35:46.637 "trsvcid": "4420", 00:35:46.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:46.637 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:46.637 "hdgst": false, 00:35:46.637 "ddgst": false 00:35:46.637 }, 00:35:46.637 "method": "bdev_nvme_attach_controller" 00:35:46.637 },{ 00:35:46.637 "params": { 00:35:46.637 "name": "Nvme1", 00:35:46.637 "trtype": "tcp", 00:35:46.637 "traddr": "10.0.0.2", 00:35:46.637 "adrfam": "ipv4", 00:35:46.637 "trsvcid": "4420", 00:35:46.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:46.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:46.637 "hdgst": false, 00:35:46.637 "ddgst": false 00:35:46.637 }, 00:35:46.637 "method": "bdev_nvme_attach_controller" 00:35:46.637 }' 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:46.637 05:49:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:46.637 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:46.637 ... 00:35:46.637 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:46.637 ... 
00:35:46.637 fio-3.35 00:35:46.637 Starting 4 threads 00:35:46.637 EAL: No free 2048 kB hugepages reported on node 1 00:35:51.907 00:35:51.907 filename0: (groupid=0, jobs=1): err= 0: pid=3408160: Sun Jul 14 05:49:58 2024 00:35:51.907 read: IOPS=1757, BW=13.7MiB/s (14.4MB/s)(68.7MiB/5002msec) 00:35:51.907 slat (nsec): min=3892, max=58566, avg=17824.46, stdev=7926.76 00:35:51.907 clat (usec): min=1901, max=6922, avg=4507.32, stdev=533.65 00:35:51.907 lat (usec): min=1910, max=6930, avg=4525.15, stdev=534.57 00:35:51.907 clat percentiles (usec): 00:35:51.907 | 1.00th=[ 3195], 5.00th=[ 3687], 10.00th=[ 3916], 20.00th=[ 4146], 00:35:51.907 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4621], 00:35:51.907 | 70.00th=[ 4686], 80.00th=[ 4883], 90.00th=[ 5145], 95.00th=[ 5342], 00:35:51.907 | 99.00th=[ 6194], 99.50th=[ 6521], 99.90th=[ 6849], 99.95th=[ 6915], 00:35:51.907 | 99.99th=[ 6915] 00:35:51.907 bw ( KiB/s): min=12976, max=15120, per=23.53%, avg=14053.33, stdev=765.58, samples=9 00:35:51.907 iops : min= 1622, max= 1890, avg=1756.67, stdev=95.70, samples=9 00:35:51.907 lat (msec) : 2=0.03%, 4=13.21%, 10=86.76% 00:35:51.907 cpu : usr=94.44%, sys=4.62%, ctx=117, majf=0, minf=37 00:35:51.907 IO depths : 1=0.2%, 2=1.5%, 4=64.6%, 8=33.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.907 complete : 0=0.0%, 4=97.4%, 8=2.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.907 issued rwts: total=8791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.907 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:51.907 filename0: (groupid=0, jobs=1): err= 0: pid=3408161: Sun Jul 14 05:49:58 2024 00:35:51.907 read: IOPS=1915, BW=15.0MiB/s (15.7MB/s)(74.8MiB/5002msec) 00:35:51.907 slat (nsec): min=3881, max=70264, avg=13192.35, stdev=6560.41 00:35:51.907 clat (usec): min=2435, max=48399, avg=4136.64, stdev=1443.17 00:35:51.907 lat (usec): min=2444, max=48426, avg=4149.83, stdev=1443.04 00:35:51.907 clat percentiles (usec): 00:35:51.907 | 1.00th=[ 3097], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3621], 00:35:51.907 | 30.00th=[ 3720], 40.00th=[ 3818], 50.00th=[ 3949], 60.00th=[ 4080], 00:35:51.907 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 5211], 95.00th=[ 5538], 00:35:51.907 | 99.00th=[ 6259], 99.50th=[ 6390], 99.90th=[ 7177], 99.95th=[48497], 00:35:51.907 | 99.99th=[48497] 00:35:51.907 bw ( KiB/s): min=13488, max=16192, per=25.50%, avg=15232.00, stdev=1025.41, samples=9 00:35:51.907 iops : min= 1686, max= 2024, avg=1904.00, stdev=128.18, samples=9 00:35:51.907 lat (msec) : 4=54.80%, 10=45.11%, 50=0.08% 00:35:51.907 cpu : usr=94.18%, sys=5.32%, ctx=6, majf=0, minf=32 00:35:51.907 IO depths : 1=0.1%, 2=1.5%, 4=70.6%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.907 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.907 issued rwts: total=9580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.908 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:51.908 filename1: (groupid=0, jobs=1): err= 0: pid=3408162: Sun Jul 14 05:49:58 2024 00:35:51.908 read: IOPS=1931, BW=15.1MiB/s (15.8MB/s)(75.5MiB/5002msec) 00:35:51.908 slat (nsec): min=3890, max=69078, avg=12938.39, stdev=6738.89 00:35:51.908 clat (usec): min=1665, max=6962, avg=4103.45, stdev=705.40 00:35:51.908 lat (usec): min=1678, max=6970, avg=4116.38, stdev=704.65 00:35:51.908 clat percentiles (usec): 00:35:51.908 | 1.00th=[ 3032], 5.00th=[ 
3326], 10.00th=[ 3425], 20.00th=[ 3621], 00:35:51.908 | 30.00th=[ 3720], 40.00th=[ 3818], 50.00th=[ 3916], 60.00th=[ 4047], 00:35:51.908 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 5276], 95.00th=[ 5604], 00:35:51.908 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 6783], 99.95th=[ 6849], 00:35:51.908 | 99.99th=[ 6980] 00:35:51.908 bw ( KiB/s): min=14765, max=16224, per=25.87%, avg=15449.30, stdev=492.24, samples=10 00:35:51.908 iops : min= 1845, max= 2028, avg=1931.10, stdev=61.63, samples=10 00:35:51.908 lat (msec) : 2=0.01%, 4=55.45%, 10=44.54% 00:35:51.908 cpu : usr=94.88%, sys=4.52%, ctx=16, majf=0, minf=82 00:35:51.908 IO depths : 1=0.1%, 2=1.0%, 4=71.2%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.908 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.908 issued rwts: total=9659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.908 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:51.908 filename1: (groupid=0, jobs=1): err= 0: pid=3408163: Sun Jul 14 05:49:58 2024 00:35:51.908 read: IOPS=1862, BW=14.5MiB/s (15.3MB/s)(72.8MiB/5002msec) 00:35:51.908 slat (nsec): min=3820, max=57443, avg=13033.71, stdev=6553.89 00:35:51.908 clat (usec): min=1262, max=8824, avg=4258.81, stdev=619.36 00:35:51.908 lat (usec): min=1271, max=8837, avg=4271.84, stdev=619.22 00:35:51.908 clat percentiles (usec): 00:35:51.908 | 1.00th=[ 3097], 5.00th=[ 3425], 10.00th=[ 3621], 20.00th=[ 3785], 00:35:51.908 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4178], 60.00th=[ 4293], 00:35:51.908 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 4883], 95.00th=[ 5604], 00:35:51.908 | 99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 7046], 99.95th=[ 7898], 00:35:51.908 | 99.99th=[ 8848] 00:35:51.908 bw ( KiB/s): min=14112, max=15856, per=25.06%, avg=14968.89, stdev=534.45, samples=9 00:35:51.908 iops : min= 1764, max= 1982, avg=1871.11, stdev=66.81, samples=9 00:35:51.908 lat (msec) : 2=0.12%, 4=34.66%, 10=65.22% 00:35:51.908 cpu : usr=94.62%, sys=4.88%, ctx=7, majf=0, minf=36 00:35:51.908 IO depths : 1=0.5%, 2=1.9%, 4=67.9%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.908 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.908 issued rwts: total=9314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.908 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:51.908 00:35:51.908 Run status group 0 (all jobs): 00:35:51.908 READ: bw=58.3MiB/s (61.2MB/s), 13.7MiB/s-15.1MiB/s (14.4MB/s-15.8MB/s), io=292MiB (306MB), run=5002-5002msec 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.908 00:35:51.908 real 0m24.286s 00:35:51.908 user 4m33.772s 00:35:51.908 sys 0m6.671s 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:51.908 05:49:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.908 ************************************ 00:35:51.908 END TEST fio_dif_rand_params 00:35:51.908 ************************************ 00:35:51.908 05:49:58 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:51.908 05:49:58 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:51.908 05:49:58 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:51.908 05:49:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:51.908 ************************************ 00:35:51.908 START TEST fio_dif_digest 00:35:51.908 ************************************ 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:51.908 bdev_null0 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:51.908 [2024-07-14 05:49:58.803571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:51.908 { 00:35:51.908 "params": { 00:35:51.908 "name": "Nvme$subsystem", 00:35:51.908 "trtype": "$TEST_TRANSPORT", 00:35:51.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:51.908 "adrfam": "ipv4", 00:35:51.908 "trsvcid": "$NVMF_PORT", 00:35:51.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:51.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:51.908 "hdgst": ${hdgst:-false}, 00:35:51.908 "ddgst": ${ddgst:-false} 00:35:51.908 }, 00:35:51.908 "method": 
"bdev_nvme_attach_controller" 00:35:51.908 } 00:35:51.908 EOF 00:35:51.908 )") 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:35:51.908 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:51.909 "params": { 00:35:51.909 "name": "Nvme0", 00:35:51.909 "trtype": "tcp", 00:35:51.909 "traddr": "10.0.0.2", 00:35:51.909 "adrfam": "ipv4", 00:35:51.909 "trsvcid": "4420", 00:35:51.909 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:51.909 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:51.909 "hdgst": true, 00:35:51.909 "ddgst": true 00:35:51.909 }, 00:35:51.909 "method": "bdev_nvme_attach_controller" 00:35:51.909 }' 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:51.909 05:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.167 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:52.167 ... 
00:35:52.167 fio-3.35 00:35:52.167 Starting 3 threads 00:35:52.167 EAL: No free 2048 kB hugepages reported on node 1 00:36:04.362 00:36:04.362 filename0: (groupid=0, jobs=1): err= 0: pid=3408910: Sun Jul 14 05:50:09 2024 00:36:04.362 read: IOPS=223, BW=28.0MiB/s (29.3MB/s)(281MiB/10047msec) 00:36:04.362 slat (nsec): min=5054, max=26621, avg=14428.70, stdev=1370.67 00:36:04.362 clat (usec): min=6155, max=55846, avg=13377.01, stdev=5209.69 00:36:04.362 lat (usec): min=6169, max=55860, avg=13391.43, stdev=5209.73 00:36:04.362 clat percentiles (usec): 00:36:04.362 | 1.00th=[ 8160], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10552], 00:36:04.362 | 30.00th=[11994], 40.00th=[12911], 50.00th=[13435], 60.00th=[13829], 00:36:04.362 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15008], 95.00th=[15533], 00:36:04.362 | 99.00th=[52167], 99.50th=[54789], 99.90th=[55837], 99.95th=[55837], 00:36:04.362 | 99.99th=[55837] 00:36:04.362 bw ( KiB/s): min=25856, max=30720, per=37.15%, avg=28736.00, stdev=1208.98, samples=20 00:36:04.362 iops : min= 202, max= 240, avg=224.50, stdev= 9.45, samples=20 00:36:04.362 lat (msec) : 10=11.66%, 20=86.78%, 50=0.22%, 100=1.34% 00:36:04.362 cpu : usr=92.58%, sys=6.84%, ctx=40, majf=0, minf=137 00:36:04.362 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:04.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.362 issued rwts: total=2247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.362 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:04.362 filename0: (groupid=0, jobs=1): err= 0: pid=3408911: Sun Jul 14 05:50:09 2024 00:36:04.362 read: IOPS=156, BW=19.6MiB/s (20.5MB/s)(197MiB/10041msec) 00:36:04.362 slat (nsec): min=5203, max=31004, avg=14820.36, stdev=1686.72 00:36:04.362 clat (usec): min=6768, max=99201, avg=19141.93, stdev=12515.59 00:36:04.362 lat (usec): min=6782, max=99216, avg=19156.75, stdev=12515.61 00:36:04.362 clat percentiles (usec): 00:36:04.362 | 1.00th=[ 9765], 5.00th=[11469], 10.00th=[13173], 20.00th=[14353], 00:36:04.362 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15664], 60.00th=[16057], 00:36:04.362 | 70.00th=[16581], 80.00th=[17171], 90.00th=[19006], 95.00th=[56361], 00:36:04.362 | 99.00th=[58459], 99.50th=[59507], 99.90th=[98042], 99.95th=[99091], 00:36:04.362 | 99.99th=[99091] 00:36:04.362 bw ( KiB/s): min=13568, max=23296, per=25.96%, avg=20081.25, stdev=2583.95, samples=20 00:36:04.362 iops : min= 106, max= 182, avg=156.85, stdev=20.20, samples=20 00:36:04.362 lat (msec) : 10=1.15%, 20=89.57%, 50=0.32%, 100=8.97% 00:36:04.362 cpu : usr=93.26%, sys=6.25%, ctx=15, majf=0, minf=88 00:36:04.362 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:04.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.362 issued rwts: total=1572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.362 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:04.362 filename0: (groupid=0, jobs=1): err= 0: pid=3408912: Sun Jul 14 05:50:09 2024 00:36:04.362 read: IOPS=224, BW=28.0MiB/s (29.4MB/s)(282MiB/10047msec) 00:36:04.362 slat (nsec): min=5085, max=38932, avg=14128.40, stdev=1602.05 00:36:04.362 clat (usec): min=6065, max=55788, avg=13342.07, stdev=5520.22 00:36:04.362 lat (usec): min=6077, max=55802, avg=13356.20, stdev=5520.29 00:36:04.362 clat percentiles (usec): 
00:36:04.362 | 1.00th=[ 6980], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10290], 00:36:04.362 | 30.00th=[11731], 40.00th=[12780], 50.00th=[13304], 60.00th=[13829], 00:36:04.362 | 70.00th=[14091], 80.00th=[14615], 90.00th=[15139], 95.00th=[15664], 00:36:04.362 | 99.00th=[53740], 99.50th=[54789], 99.90th=[55313], 99.95th=[55837], 00:36:04.362 | 99.99th=[55837] 00:36:04.362 bw ( KiB/s): min=25344, max=33024, per=37.25%, avg=28812.55, stdev=2332.28, samples=20 00:36:04.362 iops : min= 198, max= 258, avg=225.05, stdev=18.22, samples=20 00:36:04.362 lat (msec) : 10=15.36%, 20=82.96%, 50=0.18%, 100=1.51% 00:36:04.362 cpu : usr=92.44%, sys=7.03%, ctx=18, majf=0, minf=161 00:36:04.362 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:04.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.362 issued rwts: total=2253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.362 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:04.362 00:36:04.362 Run status group 0 (all jobs): 00:36:04.362 READ: bw=75.5MiB/s (79.2MB/s), 19.6MiB/s-28.0MiB/s (20.5MB/s-29.4MB/s), io=759MiB (796MB), run=10041-10047msec 00:36:04.362 05:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:04.362 05:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:04.363 05:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:04.363 05:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:04.363 05:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:04.363 05:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:04.363 05:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.363 05:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:04.363 05:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.363 05:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:04.363 05:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.363 05:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:04.363 05:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.363 00:36:04.363 real 0m11.090s 00:36:04.363 user 0m29.180s 00:36:04.363 sys 0m2.304s 00:36:04.363 05:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:04.363 05:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:04.363 ************************************ 00:36:04.363 END TEST fio_dif_digest 00:36:04.363 ************************************ 00:36:04.363 05:50:09 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:04.363 05:50:09 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:04.363 05:50:09 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:04.363 05:50:09 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:04.363 05:50:09 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:04.363 05:50:09 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:04.363 05:50:09 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:04.363 05:50:09 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:04.363 rmmod nvme_tcp 00:36:04.363 rmmod 
nvme_fabrics 00:36:04.363 rmmod nvme_keyring 00:36:04.363 05:50:09 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:04.363 05:50:09 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:04.363 05:50:09 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:04.363 05:50:09 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3402880 ']' 00:36:04.363 05:50:09 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3402880 00:36:04.363 05:50:09 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 3402880 ']' 00:36:04.363 05:50:09 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 3402880 00:36:04.363 05:50:09 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:36:04.363 05:50:09 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:04.363 05:50:09 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3402880 00:36:04.363 05:50:09 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:04.363 05:50:09 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:04.363 05:50:09 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3402880' 00:36:04.363 killing process with pid 3402880 00:36:04.363 05:50:09 nvmf_dif -- common/autotest_common.sh@965 -- # kill 3402880 00:36:04.363 05:50:09 nvmf_dif -- common/autotest_common.sh@970 -- # wait 3402880 00:36:04.363 05:50:10 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:04.363 05:50:10 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:04.363 Waiting for block devices as requested 00:36:04.363 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:04.363 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:04.629 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:04.629 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:04.629 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:04.931 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:04.931 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:04.931 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:04.931 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:04.931 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:05.190 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:05.190 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:05.190 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:05.190 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:05.448 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:05.448 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:05.448 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:05.707 05:50:12 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:05.707 05:50:12 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:05.707 05:50:12 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:05.707 05:50:12 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:05.707 05:50:12 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:05.707 05:50:12 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:05.707 05:50:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:07.624 05:50:14 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:07.624 00:36:07.624 real 1m6.456s 00:36:07.624 user 6m30.158s 00:36:07.624 sys 0m18.038s 00:36:07.624 05:50:14 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:07.624 05:50:14 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:36:07.624 ************************************ 00:36:07.624 END TEST nvmf_dif 00:36:07.624 ************************************ 00:36:07.624 05:50:14 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:07.624 05:50:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:07.624 05:50:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:07.624 05:50:14 -- common/autotest_common.sh@10 -- # set +x 00:36:07.624 ************************************ 00:36:07.624 START TEST nvmf_abort_qd_sizes 00:36:07.624 ************************************ 00:36:07.624 05:50:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:07.624 * Looking for test storage... 00:36:07.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:07.624 05:50:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:07.624 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:07.624 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:07.624 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:07.624 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:07.624 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:07.624 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:07.624 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:07.624 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:07.624 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:07.624 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:07.624 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:07.883 05:50:14 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:07.883 05:50:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:09.787 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:09.788 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:09.788 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:09.788 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:09.788 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:09.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:09.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:36:09.788 00:36:09.788 --- 10.0.0.2 ping statistics --- 00:36:09.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:09.788 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:09.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:09.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:36:09.788 00:36:09.788 --- 10.0.0.1 ping statistics --- 00:36:09.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:09.788 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:09.788 05:50:16 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:10.723 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:10.723 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:10.723 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:10.982 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:10.982 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:10.982 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:10.982 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:10.982 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:10.982 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:10.982 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:10.982 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:10.982 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:10.982 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:10.982 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:10.982 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:10.982 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:11.917 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:11.917 05:50:19 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:11.917 05:50:19 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:11.917 05:50:19 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:11.917 05:50:19 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:11.917 05:50:19 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:11.917 05:50:19 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:12.175 05:50:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:12.175 05:50:19 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:12.175 05:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:12.175 05:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:12.175 05:50:19 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3413702 00:36:12.175 05:50:19 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:12.175 05:50:19 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3413702 00:36:12.175 05:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 3413702 ']' 00:36:12.175 05:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:12.175 05:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:12.175 05:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:12.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:12.175 05:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:12.175 05:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:12.175 [2024-07-14 05:50:19.088160] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:36:12.175 [2024-07-14 05:50:19.088249] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:12.175 EAL: No free 2048 kB hugepages reported on node 1 00:36:12.175 [2024-07-14 05:50:19.158302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:12.175 [2024-07-14 05:50:19.251072] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:12.175 [2024-07-14 05:50:19.251133] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:12.175 [2024-07-14 05:50:19.251151] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:12.175 [2024-07-14 05:50:19.251165] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:12.175 [2024-07-14 05:50:19.251176] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:12.175 [2024-07-14 05:50:19.251303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:12.175 [2024-07-14 05:50:19.251519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:12.175 [2024-07-14 05:50:19.251574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:12.175 [2024-07-14 05:50:19.251576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:12.433 05:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:12.433 ************************************ 00:36:12.433 START TEST spdk_target_abort 00:36:12.433 ************************************ 00:36:12.433 05:50:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:12.433 05:50:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:12.433 05:50:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:12.433 05:50:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.433 05:50:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.712 spdk_targetn1 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.712 [2024-07-14 05:50:22.251979] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.712 [2024-07-14 05:50:22.284261] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:15.712 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:15.713 05:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:15.713 EAL: No free 2048 kB hugepages reported on node 1 
00:36:18.987 Initializing NVMe Controllers 00:36:18.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:18.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:18.987 Initialization complete. Launching workers. 00:36:18.987 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9274, failed: 0 00:36:18.987 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1208, failed to submit 8066 00:36:18.987 success 800, unsuccess 408, failed 0 00:36:18.987 05:50:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:18.987 05:50:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:18.987 EAL: No free 2048 kB hugepages reported on node 1 00:36:22.265 Initializing NVMe Controllers 00:36:22.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:22.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:22.265 Initialization complete. Launching workers. 00:36:22.265 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8602, failed: 0 00:36:22.265 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1230, failed to submit 7372 00:36:22.265 success 331, unsuccess 899, failed 0 00:36:22.265 05:50:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:22.265 05:50:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:22.265 EAL: No free 2048 kB hugepages reported on node 1 00:36:24.822 Initializing NVMe Controllers 00:36:24.822 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:24.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:24.822 Initialization complete. Launching workers. 
00:36:24.822 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31447, failed: 0 00:36:24.823 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2710, failed to submit 28737 00:36:24.823 success 545, unsuccess 2165, failed 0 00:36:24.823 05:50:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:24.823 05:50:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.823 05:50:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:24.823 05:50:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.823 05:50:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:24.823 05:50:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.823 05:50:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:26.195 05:50:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.195 05:50:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3413702 00:36:26.195 05:50:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 3413702 ']' 00:36:26.195 05:50:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 3413702 00:36:26.195 05:50:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:36:26.195 05:50:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:26.195 05:50:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3413702 00:36:26.195 05:50:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:26.195 05:50:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:26.195 05:50:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3413702' 00:36:26.195 killing process with pid 3413702 00:36:26.195 05:50:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 3413702 00:36:26.195 05:50:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 3413702 00:36:26.453 00:36:26.453 real 0m14.054s 00:36:26.453 user 0m53.216s 00:36:26.453 sys 0m2.542s 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:26.453 ************************************ 00:36:26.453 END TEST spdk_target_abort 00:36:26.453 ************************************ 00:36:26.453 05:50:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:26.453 05:50:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:26.453 05:50:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:26.453 05:50:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:26.453 ************************************ 00:36:26.453 START TEST kernel_target_abort 00:36:26.453 
************************************ 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:26.453 05:50:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:27.825 Waiting for block devices as requested 00:36:27.825 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:27.825 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:27.825 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:27.825 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:28.083 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:28.083 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:28.083 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:28.083 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:28.341 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:28.341 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:28.341 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:28.341 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:28.600 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:28.600 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:28.600 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:28.858 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:28.858 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:28.858 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:28.858 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:28.858 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:28.858 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:36:28.858 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:28.858 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:28.858 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:28.858 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:28.858 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:28.858 No valid GPT data, bailing 00:36:29.117 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:29.117 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:29.117 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:29.117 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:29.117 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:29.117 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:29.117 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:29.117 05:50:35 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:29.117 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:29.117 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:29.117 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:29.117 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:29.117 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:29.117 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:29.117 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:29.117 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:29.117 05:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:29.117 00:36:29.117 Discovery Log Number of Records 2, Generation counter 2 00:36:29.117 =====Discovery Log Entry 0====== 00:36:29.117 trtype: tcp 00:36:29.117 adrfam: ipv4 00:36:29.117 subtype: current discovery subsystem 00:36:29.117 treq: not specified, sq flow control disable supported 00:36:29.117 portid: 1 00:36:29.117 trsvcid: 4420 00:36:29.117 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:29.117 traddr: 10.0.0.1 00:36:29.117 eflags: none 00:36:29.117 sectype: none 00:36:29.117 =====Discovery Log Entry 1====== 00:36:29.117 trtype: tcp 00:36:29.117 adrfam: ipv4 00:36:29.117 subtype: nvme subsystem 00:36:29.117 treq: not specified, sq flow control disable supported 00:36:29.117 portid: 1 00:36:29.117 trsvcid: 4420 00:36:29.117 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:29.117 traddr: 10.0.0.1 00:36:29.117 eflags: none 00:36:29.117 sectype: none 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:29.117 05:50:36 
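The configure_kernel_target steps above (the mkdir, echo and ln -s calls from nvmf/common.sh) stand up a kernel NVMe/TCP target through configfs and then verify it with nvme discover against 10.0.0.1:4420. The xtrace output does not show where the echo commands are redirected, so the attribute file names in this sketch are the stock Linux nvmet configfs layout and should be read as an assumption; the NQN, backing device and address values are the ones visible in the trace:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed target of the first echo
  echo 1 > "$subsys/attr_allow_any_host"                         # assumed target of 'echo 1'
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"

The discovery log shown just above (two records: the discovery subsystem and nqn.2016-06.io.spdk:testnqn, both on 10.0.0.1 port 4420) confirms the port and subsystem link took effect before the abort runs are repeated against the kernel target.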
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:29.117 05:50:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:29.117 EAL: No free 2048 kB hugepages reported on node 1 00:36:32.407 Initializing NVMe Controllers 00:36:32.407 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:32.407 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:32.407 Initialization complete. Launching workers. 00:36:32.407 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29010, failed: 0 00:36:32.407 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29010, failed to submit 0 00:36:32.407 success 0, unsuccess 29010, failed 0 00:36:32.407 05:50:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:32.407 05:50:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:32.407 EAL: No free 2048 kB hugepages reported on node 1 00:36:35.690 Initializing NVMe Controllers 00:36:35.690 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:35.690 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:35.691 Initialization complete. Launching workers. 
00:36:35.691 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55898, failed: 0 00:36:35.691 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14070, failed to submit 41828 00:36:35.691 success 0, unsuccess 14070, failed 0 00:36:35.691 05:50:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:35.691 05:50:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:35.691 EAL: No free 2048 kB hugepages reported on node 1 00:36:38.988 Initializing NVMe Controllers 00:36:38.988 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:38.988 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:38.988 Initialization complete. Launching workers. 00:36:38.988 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55102, failed: 0 00:36:38.988 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13750, failed to submit 41352 00:36:38.988 success 0, unsuccess 13750, failed 0 00:36:38.988 05:50:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:38.988 05:50:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:38.988 05:50:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:38.988 05:50:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:38.988 05:50:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:38.988 05:50:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:38.988 05:50:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:38.988 05:50:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:38.988 05:50:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:38.988 05:50:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:39.555 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:39.555 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:39.555 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:39.555 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:39.555 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:39.555 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:39.555 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:39.555 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:39.813 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:39.813 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:39.813 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:39.813 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:39.813 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:39.813 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:39.813 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:39.813 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:40.747 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:40.747 00:36:40.747 real 0m14.206s 00:36:40.747 user 0m4.548s 00:36:40.747 sys 0m3.367s 00:36:40.747 05:50:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:40.747 05:50:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:40.747 ************************************ 00:36:40.747 END TEST kernel_target_abort 00:36:40.747 ************************************ 00:36:40.747 05:50:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:40.747 05:50:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:40.747 05:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:40.747 05:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:40.747 05:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:40.747 05:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:40.748 05:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:40.748 05:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:40.748 rmmod nvme_tcp 00:36:40.748 rmmod nvme_fabrics 00:36:40.748 rmmod nvme_keyring 00:36:40.748 05:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:40.748 05:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:40.748 05:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:40.748 05:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3413702 ']' 00:36:40.748 05:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3413702 00:36:40.748 05:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 3413702 ']' 00:36:40.748 05:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 3413702 00:36:40.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3413702) - No such process 00:36:40.748 05:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 3413702 is not found' 00:36:40.748 Process with pid 3413702 is not found 00:36:40.748 05:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:40.748 05:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:42.124 Waiting for block devices as requested 00:36:42.124 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:42.124 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:42.124 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:42.124 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:42.382 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:42.382 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:42.382 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:42.382 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:42.640 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:42.640 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:42.640 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:42.640 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:42.640 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:42.928 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:42.929 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:42.929 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:36:42.929 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:43.187 05:50:50 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:43.187 05:50:50 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:43.187 05:50:50 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:43.187 05:50:50 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:43.187 05:50:50 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:43.187 05:50:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:43.187 05:50:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:45.085 05:50:52 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:45.085 00:36:45.085 real 0m37.468s 00:36:45.085 user 0m59.825s 00:36:45.085 sys 0m9.127s 00:36:45.085 05:50:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:45.085 05:50:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:45.085 ************************************ 00:36:45.085 END TEST nvmf_abort_qd_sizes 00:36:45.085 ************************************ 00:36:45.085 05:50:52 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:45.085 05:50:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:45.085 05:50:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:45.085 05:50:52 -- common/autotest_common.sh@10 -- # set +x 00:36:45.085 ************************************ 00:36:45.085 START TEST keyring_file 00:36:45.085 ************************************ 00:36:45.085 05:50:52 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:45.343 * Looking for test storage... 
00:36:45.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:45.343 05:50:52 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:45.343 05:50:52 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:45.343 05:50:52 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:45.343 05:50:52 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:45.343 05:50:52 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:45.343 05:50:52 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.343 05:50:52 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.343 05:50:52 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.343 05:50:52 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:45.343 05:50:52 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:45.343 05:50:52 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:45.343 05:50:52 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:45.343 05:50:52 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:45.343 05:50:52 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:45.343 05:50:52 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:45.343 05:50:52 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:45.343 05:50:52 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:45.343 05:50:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:45.343 05:50:52 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:45.343 05:50:52 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:45.343 05:50:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:45.343 05:50:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:45.343 05:50:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eO6Pq5gsnX 00:36:45.343 05:50:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:45.343 05:50:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:45.344 05:50:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:45.344 05:50:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:45.344 05:50:52 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:45.344 05:50:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:45.344 05:50:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:45.344 05:50:52 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eO6Pq5gsnX 00:36:45.344 05:50:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eO6Pq5gsnX 00:36:45.344 05:50:52 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.eO6Pq5gsnX 00:36:45.344 05:50:52 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:45.344 05:50:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:45.344 05:50:52 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:45.344 05:50:52 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:45.344 05:50:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:45.344 05:50:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:45.344 05:50:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8SS1LC6O5E 00:36:45.344 05:50:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:45.344 05:50:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:45.344 05:50:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:45.344 05:50:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:45.344 05:50:52 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:45.344 05:50:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:45.344 05:50:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:45.344 05:50:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8SS1LC6O5E 00:36:45.344 05:50:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8SS1LC6O5E 00:36:45.344 05:50:52 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.8SS1LC6O5E 00:36:45.344 05:50:52 keyring_file -- keyring/file.sh@30 -- # tgtpid=3419451 00:36:45.344 05:50:52 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:45.344 05:50:52 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3419451 00:36:45.344 05:50:52 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3419451 ']' 00:36:45.344 05:50:52 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:45.344 05:50:52 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:45.344 05:50:52 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:45.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:45.344 05:50:52 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:45.344 05:50:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:45.344 [2024-07-14 05:50:52.387891] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:36:45.344 [2024-07-14 05:50:52.387994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419451 ] 00:36:45.344 EAL: No free 2048 kB hugepages reported on node 1 00:36:45.602 [2024-07-14 05:50:52.449159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:45.602 [2024-07-14 05:50:52.538827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:45.860 05:50:52 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:45.860 [2024-07-14 05:50:52.802259] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:45.860 null0 00:36:45.860 [2024-07-14 05:50:52.834305] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:45.860 [2024-07-14 05:50:52.834779] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:45.860 [2024-07-14 05:50:52.842318] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.860 05:50:52 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:45.860 [2024-07-14 05:50:52.854340] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:45.860 request: 00:36:45.860 { 00:36:45.860 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:45.860 "secure_channel": false, 00:36:45.860 "listen_address": { 00:36:45.860 "trtype": "tcp", 00:36:45.860 "traddr": "127.0.0.1", 00:36:45.860 "trsvcid": "4420" 00:36:45.860 }, 00:36:45.860 "method": "nvmf_subsystem_add_listener", 00:36:45.860 "req_id": 1 00:36:45.860 } 00:36:45.860 Got JSON-RPC error response 00:36:45.860 response: 00:36:45.860 { 00:36:45.860 "code": -32602, 00:36:45.860 "message": "Invalid parameters" 00:36:45.860 } 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:45.860 05:50:52 
keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:45.860 05:50:52 keyring_file -- keyring/file.sh@46 -- # bperfpid=3419459 00:36:45.860 05:50:52 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:45.860 05:50:52 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3419459 /var/tmp/bperf.sock 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3419459 ']' 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:45.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:45.860 05:50:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:45.860 [2024-07-14 05:50:52.900749] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:36:45.860 [2024-07-14 05:50:52.900810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419459 ] 00:36:45.860 EAL: No free 2048 kB hugepages reported on node 1 00:36:45.860 [2024-07-14 05:50:52.961526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:46.118 [2024-07-14 05:50:53.052089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:46.118 05:50:53 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:46.118 05:50:53 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:46.118 05:50:53 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eO6Pq5gsnX 00:36:46.118 05:50:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eO6Pq5gsnX 00:36:46.375 05:50:53 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.8SS1LC6O5E 00:36:46.375 05:50:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.8SS1LC6O5E 00:36:46.632 05:50:53 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:46.632 05:50:53 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:46.632 05:50:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.632 05:50:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.632 05:50:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:46.890 05:50:53 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.eO6Pq5gsnX == \/\t\m\p\/\t\m\p\.\e\O\6\P\q\5\g\s\n\X ]] 00:36:46.890 05:50:53 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:46.890 05:50:53 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:46.890 05:50:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.890 05:50:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.890 05:50:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:47.148 05:50:54 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.8SS1LC6O5E == \/\t\m\p\/\t\m\p\.\8\S\S\1\L\C\6\O\5\E ]] 00:36:47.148 05:50:54 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:47.148 05:50:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:47.148 05:50:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:47.148 05:50:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:47.148 05:50:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:47.148 05:50:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:47.406 05:50:54 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:47.406 05:50:54 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:47.406 05:50:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:47.406 05:50:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:47.406 05:50:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:47.406 05:50:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:47.406 05:50:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:47.662 05:50:54 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:47.662 05:50:54 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:47.662 05:50:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:47.918 [2024-07-14 05:50:54.888815] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:47.918 nvme0n1 00:36:47.918 05:50:54 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:47.918 05:50:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:47.918 05:50:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:47.918 05:50:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:47.918 05:50:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:47.918 05:50:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.175 05:50:55 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:48.175 05:50:55 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:48.175 05:50:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:48.175 05:50:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:48.175 05:50:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.175 
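The keyring_file checks traced here all go through bdevperf's RPC socket: the two key files created earlier are registered with keyring_file_add_key, a TCP controller is attached with --psk key0, and keyring_get_keys is queried to confirm the reference counts. A condensed restatement of those RPCs, using the socket path and names visible in the trace (this mirrors the bperf_cmd calls, it is not the file.sh script itself):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.eO6Pq5gsnX
  "$rpc" -s "$sock" keyring_file_add_key key1 /tmp/tmp.8SS1LC6O5E
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  # key0 should now be referenced by both the keyring entry and the attached bdev
  "$rpc" -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'

The refcnt of key0 rising to 2 after the attach, while key1 stays at 1, is what the (( 2 == 2 )) and (( 1 == 1 )) checks that follow are asserting.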
05:50:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.175 05:50:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:48.433 05:50:55 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:48.433 05:50:55 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:48.691 Running I/O for 1 seconds... 00:36:49.625 00:36:49.625 Latency(us) 00:36:49.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:49.625 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:49.625 nvme0n1 : 1.03 4666.51 18.23 0.00 0.00 27099.47 7233.23 41360.50 00:36:49.625 =================================================================================================================== 00:36:49.625 Total : 4666.51 18.23 0.00 0.00 27099.47 7233.23 41360.50 00:36:49.625 0 00:36:49.625 05:50:56 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:49.625 05:50:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:49.883 05:50:56 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:49.883 05:50:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:49.883 05:50:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:49.883 05:50:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.883 05:50:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.883 05:50:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:50.142 05:50:57 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:50.142 05:50:57 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:50.142 05:50:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:50.142 05:50:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:50.142 05:50:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.142 05:50:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.142 05:50:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:50.400 05:50:57 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:50.400 05:50:57 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:50.400 05:50:57 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:50.400 05:50:57 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:50.400 05:50:57 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:50.400 05:50:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:50.400 05:50:57 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:50.400 05:50:57 
keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:50.400 05:50:57 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:50.400 05:50:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:50.658 [2024-07-14 05:50:57.610219] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:50.658 [2024-07-14 05:50:57.610275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd20730 (107): Transport endpoint is not connected 00:36:50.658 [2024-07-14 05:50:57.611265] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd20730 (9): Bad file descriptor 00:36:50.658 [2024-07-14 05:50:57.612264] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:50.658 [2024-07-14 05:50:57.612304] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:50.658 [2024-07-14 05:50:57.612321] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:50.658 request: 00:36:50.658 { 00:36:50.658 "name": "nvme0", 00:36:50.658 "trtype": "tcp", 00:36:50.658 "traddr": "127.0.0.1", 00:36:50.658 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:50.658 "adrfam": "ipv4", 00:36:50.658 "trsvcid": "4420", 00:36:50.658 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:50.658 "psk": "key1", 00:36:50.658 "method": "bdev_nvme_attach_controller", 00:36:50.658 "req_id": 1 00:36:50.658 } 00:36:50.658 Got JSON-RPC error response 00:36:50.658 response: 00:36:50.658 { 00:36:50.658 "code": -5, 00:36:50.658 "message": "Input/output error" 00:36:50.658 } 00:36:50.658 05:50:57 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:50.658 05:50:57 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:50.658 05:50:57 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:50.658 05:50:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:50.658 05:50:57 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:50.658 05:50:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:50.658 05:50:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:50.658 05:50:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.658 05:50:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:50.658 05:50:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.916 05:50:57 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:50.916 05:50:57 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:50.916 05:50:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:50.916 05:50:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:50.916 05:50:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.916 05:50:57 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.916 05:50:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:51.174 05:50:58 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:51.174 05:50:58 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:51.174 05:50:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:51.432 05:50:58 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:51.432 05:50:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:51.690 05:50:58 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:51.690 05:50:58 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:51.690 05:50:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:51.949 05:50:58 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:51.949 05:50:58 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.eO6Pq5gsnX 00:36:51.949 05:50:58 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.eO6Pq5gsnX 00:36:51.949 05:50:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:51.949 05:50:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.eO6Pq5gsnX 00:36:51.949 05:50:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:51.949 05:50:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:51.949 05:50:58 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:51.949 05:50:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:51.949 05:50:58 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eO6Pq5gsnX 00:36:51.949 05:50:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eO6Pq5gsnX 00:36:52.207 [2024-07-14 05:50:59.087133] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.eO6Pq5gsnX': 0100660 00:36:52.207 [2024-07-14 05:50:59.087195] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:52.207 request: 00:36:52.207 { 00:36:52.207 "name": "key0", 00:36:52.207 "path": "/tmp/tmp.eO6Pq5gsnX", 00:36:52.207 "method": "keyring_file_add_key", 00:36:52.207 "req_id": 1 00:36:52.207 } 00:36:52.207 Got JSON-RPC error response 00:36:52.207 response: 00:36:52.207 { 00:36:52.207 "code": -1, 00:36:52.207 "message": "Operation not permitted" 00:36:52.207 } 00:36:52.207 05:50:59 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:52.207 05:50:59 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:52.207 05:50:59 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:52.207 05:50:59 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:52.207 05:50:59 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.eO6Pq5gsnX 00:36:52.207 05:50:59 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.eO6Pq5gsnX 00:36:52.207 05:50:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eO6Pq5gsnX 00:36:52.465 05:50:59 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.eO6Pq5gsnX 00:36:52.465 05:50:59 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:52.465 05:50:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:52.465 05:50:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:52.465 05:50:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.465 05:50:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.465 05:50:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:52.723 05:50:59 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:52.723 05:50:59 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:52.723 05:50:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:52.723 05:50:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:52.723 05:50:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:52.723 05:50:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:52.723 05:50:59 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:52.723 05:50:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:52.723 05:50:59 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:52.723 05:50:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:52.987 [2024-07-14 05:50:59.829174] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.eO6Pq5gsnX': No such file or directory 00:36:52.987 [2024-07-14 05:50:59.829222] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:52.987 [2024-07-14 05:50:59.829264] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:52.987 [2024-07-14 05:50:59.829277] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:52.987 [2024-07-14 05:50:59.829290] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:52.987 request: 00:36:52.987 { 00:36:52.987 "name": "nvme0", 00:36:52.987 "trtype": "tcp", 00:36:52.987 "traddr": "127.0.0.1", 00:36:52.987 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:52.987 "adrfam": "ipv4", 00:36:52.987 "trsvcid": "4420", 00:36:52.987 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:52.987 "psk": "key0", 00:36:52.987 "method": "bdev_nvme_attach_controller", 
00:36:52.987 "req_id": 1 00:36:52.987 } 00:36:52.987 Got JSON-RPC error response 00:36:52.987 response: 00:36:52.987 { 00:36:52.987 "code": -19, 00:36:52.987 "message": "No such device" 00:36:52.987 } 00:36:52.987 05:50:59 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:52.987 05:50:59 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:52.987 05:50:59 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:52.987 05:50:59 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:52.987 05:50:59 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:52.987 05:50:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:53.246 05:51:00 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:53.246 05:51:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:53.246 05:51:00 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:53.246 05:51:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:53.246 05:51:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:53.246 05:51:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:53.246 05:51:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LuVslqY8Wk 00:36:53.246 05:51:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:53.246 05:51:00 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:53.246 05:51:00 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:53.246 05:51:00 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:53.246 05:51:00 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:53.246 05:51:00 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:53.246 05:51:00 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:53.246 05:51:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LuVslqY8Wk 00:36:53.246 05:51:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LuVslqY8Wk 00:36:53.246 05:51:00 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.LuVslqY8Wk 00:36:53.246 05:51:00 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LuVslqY8Wk 00:36:53.246 05:51:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LuVslqY8Wk 00:36:53.504 05:51:00 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:53.504 05:51:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:53.762 nvme0n1 00:36:53.762 05:51:00 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:53.762 05:51:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:53.762 05:51:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.762 05:51:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.762 05:51:00 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.762 05:51:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:54.020 05:51:00 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:54.020 05:51:00 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:54.020 05:51:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:54.278 05:51:01 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:54.278 05:51:01 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:54.278 05:51:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:54.278 05:51:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:54.278 05:51:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.537 05:51:01 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:54.537 05:51:01 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:54.537 05:51:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:54.537 05:51:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:54.537 05:51:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:54.537 05:51:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.537 05:51:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:54.795 05:51:01 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:54.795 05:51:01 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:54.795 05:51:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:55.053 05:51:02 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:55.053 05:51:02 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:55.053 05:51:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.311 05:51:02 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:55.311 05:51:02 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LuVslqY8Wk 00:36:55.311 05:51:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LuVslqY8Wk 00:36:55.569 05:51:02 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.8SS1LC6O5E 00:36:55.569 05:51:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.8SS1LC6O5E 00:36:55.827 05:51:02 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:55.827 05:51:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:56.116 nvme0n1 00:36:56.116 05:51:03 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:56.116 05:51:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:56.378 05:51:03 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:56.378 "subsystems": [ 00:36:56.378 { 00:36:56.378 "subsystem": "keyring", 00:36:56.378 "config": [ 00:36:56.378 { 00:36:56.378 "method": "keyring_file_add_key", 00:36:56.378 "params": { 00:36:56.378 "name": "key0", 00:36:56.378 "path": "/tmp/tmp.LuVslqY8Wk" 00:36:56.378 } 00:36:56.378 }, 00:36:56.378 { 00:36:56.378 "method": "keyring_file_add_key", 00:36:56.378 "params": { 00:36:56.378 "name": "key1", 00:36:56.378 "path": "/tmp/tmp.8SS1LC6O5E" 00:36:56.378 } 00:36:56.378 } 00:36:56.378 ] 00:36:56.378 }, 00:36:56.378 { 00:36:56.378 "subsystem": "iobuf", 00:36:56.378 "config": [ 00:36:56.378 { 00:36:56.378 "method": "iobuf_set_options", 00:36:56.378 "params": { 00:36:56.378 "small_pool_count": 8192, 00:36:56.378 "large_pool_count": 1024, 00:36:56.378 "small_bufsize": 8192, 00:36:56.378 "large_bufsize": 135168 00:36:56.378 } 00:36:56.378 } 00:36:56.378 ] 00:36:56.378 }, 00:36:56.378 { 00:36:56.378 "subsystem": "sock", 00:36:56.378 "config": [ 00:36:56.378 { 00:36:56.378 "method": "sock_set_default_impl", 00:36:56.378 "params": { 00:36:56.378 "impl_name": "posix" 00:36:56.378 } 00:36:56.378 }, 00:36:56.378 { 00:36:56.378 "method": "sock_impl_set_options", 00:36:56.378 "params": { 00:36:56.378 "impl_name": "ssl", 00:36:56.378 "recv_buf_size": 4096, 00:36:56.378 "send_buf_size": 4096, 00:36:56.378 "enable_recv_pipe": true, 00:36:56.378 "enable_quickack": false, 00:36:56.378 "enable_placement_id": 0, 00:36:56.378 "enable_zerocopy_send_server": true, 00:36:56.378 "enable_zerocopy_send_client": false, 00:36:56.378 "zerocopy_threshold": 0, 00:36:56.378 "tls_version": 0, 00:36:56.378 "enable_ktls": false 00:36:56.378 } 00:36:56.378 }, 00:36:56.378 { 00:36:56.378 "method": "sock_impl_set_options", 00:36:56.378 "params": { 00:36:56.378 "impl_name": "posix", 00:36:56.378 "recv_buf_size": 2097152, 00:36:56.378 "send_buf_size": 2097152, 00:36:56.378 "enable_recv_pipe": true, 00:36:56.378 "enable_quickack": false, 00:36:56.378 "enable_placement_id": 0, 00:36:56.378 "enable_zerocopy_send_server": true, 00:36:56.378 "enable_zerocopy_send_client": false, 00:36:56.378 "zerocopy_threshold": 0, 00:36:56.378 "tls_version": 0, 00:36:56.378 "enable_ktls": false 00:36:56.378 } 00:36:56.378 } 00:36:56.378 ] 00:36:56.378 }, 00:36:56.378 { 00:36:56.378 "subsystem": "vmd", 00:36:56.378 "config": [] 00:36:56.378 }, 00:36:56.378 { 00:36:56.378 "subsystem": "accel", 00:36:56.378 "config": [ 00:36:56.378 { 00:36:56.378 "method": "accel_set_options", 00:36:56.378 "params": { 00:36:56.378 "small_cache_size": 128, 00:36:56.378 "large_cache_size": 16, 00:36:56.378 "task_count": 2048, 00:36:56.378 "sequence_count": 2048, 00:36:56.378 "buf_count": 2048 00:36:56.378 } 00:36:56.378 } 00:36:56.378 ] 00:36:56.378 }, 00:36:56.378 { 00:36:56.378 "subsystem": "bdev", 00:36:56.378 "config": [ 00:36:56.378 { 00:36:56.378 "method": "bdev_set_options", 00:36:56.378 "params": { 00:36:56.378 "bdev_io_pool_size": 65535, 00:36:56.378 "bdev_io_cache_size": 256, 00:36:56.378 "bdev_auto_examine": true, 00:36:56.378 "iobuf_small_cache_size": 128, 
00:36:56.378 "iobuf_large_cache_size": 16 00:36:56.378 } 00:36:56.378 }, 00:36:56.378 { 00:36:56.378 "method": "bdev_raid_set_options", 00:36:56.378 "params": { 00:36:56.378 "process_window_size_kb": 1024 00:36:56.378 } 00:36:56.378 }, 00:36:56.378 { 00:36:56.378 "method": "bdev_iscsi_set_options", 00:36:56.378 "params": { 00:36:56.378 "timeout_sec": 30 00:36:56.378 } 00:36:56.378 }, 00:36:56.378 { 00:36:56.378 "method": "bdev_nvme_set_options", 00:36:56.378 "params": { 00:36:56.378 "action_on_timeout": "none", 00:36:56.378 "timeout_us": 0, 00:36:56.378 "timeout_admin_us": 0, 00:36:56.378 "keep_alive_timeout_ms": 10000, 00:36:56.378 "arbitration_burst": 0, 00:36:56.378 "low_priority_weight": 0, 00:36:56.378 "medium_priority_weight": 0, 00:36:56.378 "high_priority_weight": 0, 00:36:56.378 "nvme_adminq_poll_period_us": 10000, 00:36:56.378 "nvme_ioq_poll_period_us": 0, 00:36:56.378 "io_queue_requests": 512, 00:36:56.378 "delay_cmd_submit": true, 00:36:56.378 "transport_retry_count": 4, 00:36:56.378 "bdev_retry_count": 3, 00:36:56.378 "transport_ack_timeout": 0, 00:36:56.378 "ctrlr_loss_timeout_sec": 0, 00:36:56.378 "reconnect_delay_sec": 0, 00:36:56.378 "fast_io_fail_timeout_sec": 0, 00:36:56.378 "disable_auto_failback": false, 00:36:56.378 "generate_uuids": false, 00:36:56.378 "transport_tos": 0, 00:36:56.378 "nvme_error_stat": false, 00:36:56.378 "rdma_srq_size": 0, 00:36:56.378 "io_path_stat": false, 00:36:56.378 "allow_accel_sequence": false, 00:36:56.378 "rdma_max_cq_size": 0, 00:36:56.378 "rdma_cm_event_timeout_ms": 0, 00:36:56.378 "dhchap_digests": [ 00:36:56.378 "sha256", 00:36:56.378 "sha384", 00:36:56.378 "sha512" 00:36:56.378 ], 00:36:56.378 "dhchap_dhgroups": [ 00:36:56.378 "null", 00:36:56.378 "ffdhe2048", 00:36:56.378 "ffdhe3072", 00:36:56.378 "ffdhe4096", 00:36:56.378 "ffdhe6144", 00:36:56.378 "ffdhe8192" 00:36:56.378 ] 00:36:56.378 } 00:36:56.378 }, 00:36:56.378 { 00:36:56.378 "method": "bdev_nvme_attach_controller", 00:36:56.378 "params": { 00:36:56.378 "name": "nvme0", 00:36:56.378 "trtype": "TCP", 00:36:56.378 "adrfam": "IPv4", 00:36:56.378 "traddr": "127.0.0.1", 00:36:56.378 "trsvcid": "4420", 00:36:56.378 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:56.378 "prchk_reftag": false, 00:36:56.378 "prchk_guard": false, 00:36:56.378 "ctrlr_loss_timeout_sec": 0, 00:36:56.378 "reconnect_delay_sec": 0, 00:36:56.378 "fast_io_fail_timeout_sec": 0, 00:36:56.378 "psk": "key0", 00:36:56.378 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:56.378 "hdgst": false, 00:36:56.379 "ddgst": false 00:36:56.379 } 00:36:56.379 }, 00:36:56.379 { 00:36:56.379 "method": "bdev_nvme_set_hotplug", 00:36:56.379 "params": { 00:36:56.379 "period_us": 100000, 00:36:56.379 "enable": false 00:36:56.379 } 00:36:56.379 }, 00:36:56.379 { 00:36:56.379 "method": "bdev_wait_for_examine" 00:36:56.379 } 00:36:56.379 ] 00:36:56.379 }, 00:36:56.379 { 00:36:56.379 "subsystem": "nbd", 00:36:56.379 "config": [] 00:36:56.379 } 00:36:56.379 ] 00:36:56.379 }' 00:36:56.379 05:51:03 keyring_file -- keyring/file.sh@114 -- # killprocess 3419459 00:36:56.379 05:51:03 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3419459 ']' 00:36:56.379 05:51:03 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3419459 00:36:56.379 05:51:03 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:56.379 05:51:03 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:56.379 05:51:03 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3419459 00:36:56.379 05:51:03 keyring_file 
-- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:56.379 05:51:03 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:56.379 05:51:03 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3419459' 00:36:56.379 killing process with pid 3419459 00:36:56.379 05:51:03 keyring_file -- common/autotest_common.sh@965 -- # kill 3419459 00:36:56.379 Received shutdown signal, test time was about 1.000000 seconds 00:36:56.379 00:36:56.379 Latency(us) 00:36:56.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:56.379 =================================================================================================================== 00:36:56.379 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:56.379 05:51:03 keyring_file -- common/autotest_common.sh@970 -- # wait 3419459 00:36:56.636 05:51:03 keyring_file -- keyring/file.sh@117 -- # bperfpid=3421028 00:36:56.636 05:51:03 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3421028 /var/tmp/bperf.sock 00:36:56.636 05:51:03 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3421028 ']' 00:36:56.636 05:51:03 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:56.636 05:51:03 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:56.636 05:51:03 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:56.636 05:51:03 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:56.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
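The restart above feeds the configuration captured from the first bdevperf instance straight back into a fresh one through /dev/fd/63. A minimal sketch of that pattern, assuming an SPDK build tree and the bperf.sock path used throughout this run (the test saves the config before killing the old process):

  # capture the live configuration, keyring keys included, as JSON
  CONF=$(./scripts/rpc.py -s /var/tmp/bperf.sock save_config)

  # hand it to a new bdevperf; bash process substitution shows up inside the
  # process as /dev/fd/63, which is exactly what the -c argument above records
  ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$CONF")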
00:36:56.636 05:51:03 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:56.636 05:51:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:56.636 05:51:03 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:56.637 "subsystems": [ 00:36:56.637 { 00:36:56.637 "subsystem": "keyring", 00:36:56.637 "config": [ 00:36:56.637 { 00:36:56.637 "method": "keyring_file_add_key", 00:36:56.637 "params": { 00:36:56.637 "name": "key0", 00:36:56.637 "path": "/tmp/tmp.LuVslqY8Wk" 00:36:56.637 } 00:36:56.637 }, 00:36:56.637 { 00:36:56.637 "method": "keyring_file_add_key", 00:36:56.637 "params": { 00:36:56.637 "name": "key1", 00:36:56.637 "path": "/tmp/tmp.8SS1LC6O5E" 00:36:56.637 } 00:36:56.637 } 00:36:56.637 ] 00:36:56.637 }, 00:36:56.637 { 00:36:56.637 "subsystem": "iobuf", 00:36:56.637 "config": [ 00:36:56.637 { 00:36:56.637 "method": "iobuf_set_options", 00:36:56.637 "params": { 00:36:56.637 "small_pool_count": 8192, 00:36:56.637 "large_pool_count": 1024, 00:36:56.637 "small_bufsize": 8192, 00:36:56.637 "large_bufsize": 135168 00:36:56.637 } 00:36:56.637 } 00:36:56.637 ] 00:36:56.637 }, 00:36:56.637 { 00:36:56.637 "subsystem": "sock", 00:36:56.637 "config": [ 00:36:56.637 { 00:36:56.637 "method": "sock_set_default_impl", 00:36:56.637 "params": { 00:36:56.637 "impl_name": "posix" 00:36:56.637 } 00:36:56.637 }, 00:36:56.637 { 00:36:56.637 "method": "sock_impl_set_options", 00:36:56.637 "params": { 00:36:56.637 "impl_name": "ssl", 00:36:56.637 "recv_buf_size": 4096, 00:36:56.637 "send_buf_size": 4096, 00:36:56.637 "enable_recv_pipe": true, 00:36:56.637 "enable_quickack": false, 00:36:56.637 "enable_placement_id": 0, 00:36:56.637 "enable_zerocopy_send_server": true, 00:36:56.637 "enable_zerocopy_send_client": false, 00:36:56.637 "zerocopy_threshold": 0, 00:36:56.637 "tls_version": 0, 00:36:56.637 "enable_ktls": false 00:36:56.637 } 00:36:56.637 }, 00:36:56.637 { 00:36:56.637 "method": "sock_impl_set_options", 00:36:56.637 "params": { 00:36:56.637 "impl_name": "posix", 00:36:56.637 "recv_buf_size": 2097152, 00:36:56.637 "send_buf_size": 2097152, 00:36:56.637 "enable_recv_pipe": true, 00:36:56.637 "enable_quickack": false, 00:36:56.637 "enable_placement_id": 0, 00:36:56.637 "enable_zerocopy_send_server": true, 00:36:56.637 "enable_zerocopy_send_client": false, 00:36:56.637 "zerocopy_threshold": 0, 00:36:56.637 "tls_version": 0, 00:36:56.637 "enable_ktls": false 00:36:56.637 } 00:36:56.637 } 00:36:56.637 ] 00:36:56.637 }, 00:36:56.637 { 00:36:56.637 "subsystem": "vmd", 00:36:56.637 "config": [] 00:36:56.637 }, 00:36:56.637 { 00:36:56.637 "subsystem": "accel", 00:36:56.637 "config": [ 00:36:56.637 { 00:36:56.637 "method": "accel_set_options", 00:36:56.637 "params": { 00:36:56.637 "small_cache_size": 128, 00:36:56.637 "large_cache_size": 16, 00:36:56.637 "task_count": 2048, 00:36:56.637 "sequence_count": 2048, 00:36:56.637 "buf_count": 2048 00:36:56.637 } 00:36:56.637 } 00:36:56.637 ] 00:36:56.637 }, 00:36:56.637 { 00:36:56.637 "subsystem": "bdev", 00:36:56.637 "config": [ 00:36:56.637 { 00:36:56.637 "method": "bdev_set_options", 00:36:56.637 "params": { 00:36:56.637 "bdev_io_pool_size": 65535, 00:36:56.637 "bdev_io_cache_size": 256, 00:36:56.637 "bdev_auto_examine": true, 00:36:56.637 "iobuf_small_cache_size": 128, 00:36:56.637 "iobuf_large_cache_size": 16 00:36:56.637 } 00:36:56.637 }, 00:36:56.637 { 00:36:56.637 "method": "bdev_raid_set_options", 00:36:56.637 "params": { 00:36:56.637 "process_window_size_kb": 1024 00:36:56.637 } 00:36:56.637 }, 00:36:56.637 { 00:36:56.637 
"method": "bdev_iscsi_set_options", 00:36:56.637 "params": { 00:36:56.637 "timeout_sec": 30 00:36:56.637 } 00:36:56.637 }, 00:36:56.637 { 00:36:56.637 "method": "bdev_nvme_set_options", 00:36:56.637 "params": { 00:36:56.637 "action_on_timeout": "none", 00:36:56.637 "timeout_us": 0, 00:36:56.637 "timeout_admin_us": 0, 00:36:56.637 "keep_alive_timeout_ms": 10000, 00:36:56.637 "arbitration_burst": 0, 00:36:56.637 "low_priority_weight": 0, 00:36:56.637 "medium_priority_weight": 0, 00:36:56.637 "high_priority_weight": 0, 00:36:56.637 "nvme_adminq_poll_period_us": 10000, 00:36:56.637 "nvme_ioq_poll_period_us": 0, 00:36:56.637 "io_queue_requests": 512, 00:36:56.637 "delay_cmd_submit": true, 00:36:56.637 "transport_retry_count": 4, 00:36:56.637 "bdev_retry_count": 3, 00:36:56.637 "transport_ack_timeout": 0, 00:36:56.637 "ctrlr_loss_timeout_sec": 0, 00:36:56.637 "reconnect_delay_sec": 0, 00:36:56.637 "fast_io_fail_timeout_sec": 0, 00:36:56.637 "disable_auto_failback": false, 00:36:56.637 "generate_uuids": false, 00:36:56.637 "transport_tos": 0, 00:36:56.637 "nvme_error_stat": false, 00:36:56.637 "rdma_srq_size": 0, 00:36:56.637 "io_path_stat": false, 00:36:56.637 "allow_accel_sequence": false, 00:36:56.637 "rdma_max_cq_size": 0, 00:36:56.637 "rdma_cm_event_timeout_ms": 0, 00:36:56.637 "dhchap_digests": [ 00:36:56.637 "sha256", 00:36:56.637 "sha384", 00:36:56.637 "sha512" 00:36:56.637 ], 00:36:56.637 "dhchap_dhgroups": [ 00:36:56.637 "null", 00:36:56.637 "ffdhe2048", 00:36:56.637 "ffdhe3072", 00:36:56.637 "ffdhe4096", 00:36:56.637 "ffdhe6144", 00:36:56.637 "ffdhe8192" 00:36:56.637 ] 00:36:56.637 } 00:36:56.637 }, 00:36:56.637 { 00:36:56.637 "method": "bdev_nvme_attach_controller", 00:36:56.637 "params": { 00:36:56.637 "name": "nvme0", 00:36:56.637 "trtype": "TCP", 00:36:56.637 "adrfam": "IPv4", 00:36:56.637 "traddr": "127.0.0.1", 00:36:56.637 "trsvcid": "4420", 00:36:56.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:56.637 "prchk_reftag": false, 00:36:56.637 "prchk_guard": false, 00:36:56.637 "ctrlr_loss_timeout_sec": 0, 00:36:56.637 "reconnect_delay_sec": 0, 00:36:56.637 "fast_io_fail_timeout_sec": 0, 00:36:56.637 "psk": "key0", 00:36:56.637 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:56.637 "hdgst": false, 00:36:56.637 "ddgst": false 00:36:56.637 } 00:36:56.637 }, 00:36:56.637 { 00:36:56.637 "method": "bdev_nvme_set_hotplug", 00:36:56.637 "params": { 00:36:56.637 "period_us": 100000, 00:36:56.637 "enable": false 00:36:56.637 } 00:36:56.637 }, 00:36:56.637 { 00:36:56.637 "method": "bdev_wait_for_examine" 00:36:56.637 } 00:36:56.637 ] 00:36:56.637 }, 00:36:56.637 { 00:36:56.637 "subsystem": "nbd", 00:36:56.637 "config": [] 00:36:56.637 } 00:36:56.637 ] 00:36:56.637 }' 00:36:56.637 [2024-07-14 05:51:03.712917] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:36:56.637 [2024-07-14 05:51:03.713016] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421028 ] 00:36:56.637 EAL: No free 2048 kB hugepages reported on node 1 00:36:56.895 [2024-07-14 05:51:03.773389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:56.895 [2024-07-14 05:51:03.862894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:57.153 [2024-07-14 05:51:04.051310] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:57.719 05:51:04 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:57.719 05:51:04 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:57.719 05:51:04 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:57.719 05:51:04 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:57.719 05:51:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:57.976 05:51:04 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:57.976 05:51:04 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:57.976 05:51:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:57.976 05:51:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:57.976 05:51:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:57.976 05:51:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:57.976 05:51:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:58.233 05:51:05 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:58.233 05:51:05 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:58.233 05:51:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:58.233 05:51:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:58.233 05:51:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:58.233 05:51:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:58.233 05:51:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:58.491 05:51:05 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:58.491 05:51:05 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:58.491 05:51:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:58.491 05:51:05 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:58.748 05:51:05 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:58.748 05:51:05 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:58.748 05:51:05 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.LuVslqY8Wk /tmp/tmp.8SS1LC6O5E 00:36:58.748 05:51:05 keyring_file -- keyring/file.sh@20 -- # killprocess 3421028 00:36:58.748 05:51:05 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3421028 ']' 00:36:58.748 05:51:05 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3421028 00:36:58.748 05:51:05 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:36:58.748 05:51:05 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:58.748 05:51:05 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3421028 00:36:58.748 05:51:05 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:58.748 05:51:05 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:58.748 05:51:05 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3421028' 00:36:58.748 killing process with pid 3421028 00:36:58.748 05:51:05 keyring_file -- common/autotest_common.sh@965 -- # kill 3421028 00:36:58.748 Received shutdown signal, test time was about 1.000000 seconds 00:36:58.748 00:36:58.748 Latency(us) 00:36:58.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:58.748 =================================================================================================================== 00:36:58.748 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:58.748 05:51:05 keyring_file -- common/autotest_common.sh@970 -- # wait 3421028 00:36:59.007 05:51:05 keyring_file -- keyring/file.sh@21 -- # killprocess 3419451 00:36:59.007 05:51:05 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3419451 ']' 00:36:59.007 05:51:05 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3419451 00:36:59.007 05:51:05 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:59.007 05:51:05 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:59.007 05:51:05 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3419451 00:36:59.007 05:51:05 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:59.007 05:51:05 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:59.007 05:51:05 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3419451' 00:36:59.007 killing process with pid 3419451 00:36:59.007 05:51:05 keyring_file -- common/autotest_common.sh@965 -- # kill 3419451 00:36:59.007 [2024-07-14 05:51:05.999745] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:59.007 05:51:05 keyring_file -- common/autotest_common.sh@970 -- # wait 3419451 00:36:59.574 00:36:59.574 real 0m14.227s 00:36:59.574 user 0m34.895s 00:36:59.574 sys 0m3.386s 00:36:59.574 05:51:06 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:59.574 05:51:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:59.574 ************************************ 00:36:59.574 END TEST keyring_file 00:36:59.574 ************************************ 00:36:59.574 05:51:06 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:36:59.574 05:51:06 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:59.574 05:51:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:59.574 05:51:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:59.574 05:51:06 -- common/autotest_common.sh@10 -- # set +x 00:36:59.574 ************************************ 00:36:59.574 START TEST keyring_linux 00:36:59.574 ************************************ 00:36:59.574 05:51:06 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:59.574 * Looking for test storage... 
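Every bperf_cmd line in this trace, both in the keyring_file run that just ended and in the keyring_linux run starting here, expands to the same rpc.py call against the bdevperf RPC socket. A sketch of what that wrapper likely looks like, reconstructed from the keyring/common.sh@8 entries above (rootdir stands in for the SPDK checkout on this node):

  bperfsock=/var/tmp/bperf.sock

  bperf_cmd() {
      # forward any RPC (keyring_get_keys, keyring_file_add_key, ...) to the
      # bdevperf instance listening on the bperf socket
      "$rootdir/scripts/rpc.py" -s "$bperfsock" "$@"
  }

  # typical calls from this run
  bperf_cmd keyring_get_keys
  bperf_cmd keyring_file_add_key key0 /tmp/tmp.LuVslqY8Wk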
00:36:59.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:59.574 05:51:06 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:59.574 05:51:06 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:59.574 05:51:06 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:59.574 05:51:06 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:59.574 05:51:06 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:59.574 05:51:06 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.574 05:51:06 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.574 05:51:06 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.574 05:51:06 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:59.574 05:51:06 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:59.574 05:51:06 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:59.574 05:51:06 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:59.574 05:51:06 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:59.574 05:51:06 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:59.574 05:51:06 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:59.574 05:51:06 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:59.574 05:51:06 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:59.574 05:51:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:59.574 05:51:06 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:59.574 05:51:06 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:59.574 05:51:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:59.574 05:51:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:59.574 05:51:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:59.574 05:51:06 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:59.574 05:51:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:59.574 /tmp/:spdk-test:key0 00:36:59.574 05:51:06 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:59.574 05:51:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:59.574 05:51:06 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:59.574 05:51:06 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:59.574 05:51:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:59.574 05:51:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:59.574 05:51:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:59.574 05:51:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:59.574 05:51:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:59.574 05:51:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:59.574 /tmp/:spdk-test:key1 00:36:59.574 05:51:06 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3421393 00:36:59.574 05:51:06 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:59.574 05:51:06 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3421393 00:36:59.575 05:51:06 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 3421393 ']' 00:36:59.575 05:51:06 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:59.575 05:51:06 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:59.575 05:51:06 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:59.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:59.575 05:51:06 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:59.575 05:51:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:59.575 [2024-07-14 05:51:06.671177] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
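prep_key above turns a raw hex string into the NVMe TLS PSK interchange format before anything touches the kernel keyring. A minimal sketch of that step, assuming the helpers sourced from test/keyring/common.sh and test/nvmf/common.sh are available (values follow the trace):

  name=key0
  key=00112233445566778899aabbccddeeff
  digest=0
  path=/tmp/:spdk-test:key0

  # format_interchange_psk (test/nvmf/common.sh) wraps the raw key into the
  # NVMeTLSkey-1:00:<base64 payload>: string seen later in the keyctl calls
  format_interchange_psk "$key" "$digest" > "$path"

  # keyring.c rejects key files more permissive than owner read/write
  chmod 0600 "$path"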
00:36:59.575 [2024-07-14 05:51:06.671270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421393 ] 00:36:59.833 EAL: No free 2048 kB hugepages reported on node 1 00:36:59.833 [2024-07-14 05:51:06.730069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:59.833 [2024-07-14 05:51:06.818369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:00.091 05:51:07 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:00.091 05:51:07 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:00.091 05:51:07 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:00.091 05:51:07 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:00.091 05:51:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:00.091 [2024-07-14 05:51:07.075608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:00.091 null0 00:37:00.091 [2024-07-14 05:51:07.107694] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:00.091 [2024-07-14 05:51:07.108190] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:00.091 05:51:07 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:00.091 05:51:07 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:00.091 74703961 00:37:00.091 05:51:07 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:00.091 385544526 00:37:00.091 05:51:07 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3421521 00:37:00.091 05:51:07 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:00.091 05:51:07 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3421521 /var/tmp/bperf.sock 00:37:00.091 05:51:07 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 3421521 ']' 00:37:00.091 05:51:07 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:00.091 05:51:07 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:00.091 05:51:07 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:00.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:00.091 05:51:07 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:00.091 05:51:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:00.091 [2024-07-14 05:51:07.172333] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
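The two keyctl add calls above are what produce the serial numbers 74703961 and 385544526 that the test verifies later. A minimal sketch of loading and inspecting the keys; the trace passes the literal NVMeTLSkey-1 strings, but reading them back from the files prepared a moment ago is equivalent:

  # add both PSKs to the session keyring (@s); keyctl prints the new serial
  sn0=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
  sn1=$(keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s)

  # later checks resolve the name back to its serial and dump the payload
  keyctl search @s user :spdk-test:key0    # prints $sn0 (74703961 in this run)
  keyctl print "$sn0"                      # NVMeTLSkey-1:00:...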
00:37:00.091 [2024-07-14 05:51:07.172397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421521 ] 00:37:00.349 EAL: No free 2048 kB hugepages reported on node 1 00:37:00.349 [2024-07-14 05:51:07.233670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:00.349 [2024-07-14 05:51:07.335043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:00.349 05:51:07 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:00.349 05:51:07 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:00.349 05:51:07 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:00.349 05:51:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:00.607 05:51:07 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:00.607 05:51:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:01.174 05:51:07 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:01.174 05:51:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:01.174 [2024-07-14 05:51:08.236290] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:01.432 nvme0n1 00:37:01.432 05:51:08 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:01.432 05:51:08 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:01.432 05:51:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:01.432 05:51:08 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:01.432 05:51:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:01.432 05:51:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:01.690 05:51:08 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:01.690 05:51:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:01.690 05:51:08 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:01.690 05:51:08 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:01.690 05:51:08 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:01.690 05:51:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:01.690 05:51:08 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:01.949 05:51:08 keyring_linux -- keyring/linux.sh@25 -- # sn=74703961 00:37:01.949 05:51:08 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:01.949 05:51:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
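With the keys in the kernel keyring, the trace enables the keyring_linux module, finishes framework init (bdevperf was started with --wait-for-rpc), and attaches the TCP controller by keyring name rather than by file path. A condensed sketch of that sequence, with rpc.py standing in for the full scripts/rpc.py path used above:

  rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  rpc.py -s /var/tmp/bperf.sock framework_start_init
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk :spdk-test:key0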
00:37:01.949 05:51:08 keyring_linux -- keyring/linux.sh@26 -- # [[ 74703961 == \7\4\7\0\3\9\6\1 ]] 00:37:01.949 05:51:08 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 74703961 00:37:01.949 05:51:08 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:01.949 05:51:08 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:01.949 Running I/O for 1 seconds... 00:37:02.882 00:37:02.882 Latency(us) 00:37:02.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:02.882 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:02.882 nvme0n1 : 1.03 3204.61 12.52 0.00 0.00 39418.69 13981.01 55535.69 00:37:02.882 =================================================================================================================== 00:37:02.882 Total : 3204.61 12.52 0.00 0.00 39418.69 13981.01 55535.69 00:37:02.882 0 00:37:02.882 05:51:09 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:02.882 05:51:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:03.141 05:51:10 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:03.141 05:51:10 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:03.141 05:51:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:03.141 05:51:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:03.141 05:51:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:03.141 05:51:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:03.399 05:51:10 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:03.399 05:51:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:03.399 05:51:10 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:03.399 05:51:10 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:03.399 05:51:10 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:03.399 05:51:10 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:03.399 05:51:10 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:03.399 05:51:10 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:03.399 05:51:10 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:03.399 05:51:10 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:03.399 05:51:10 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:03.399 05:51:10 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:03.656 [2024-07-14 05:51:10.719358] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:03.656 [2024-07-14 05:51:10.719863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abaea0 (107): Transport endpoint is not connected 00:37:03.656 [2024-07-14 05:51:10.720852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abaea0 (9): Bad file descriptor 00:37:03.656 [2024-07-14 05:51:10.721849] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:03.656 [2024-07-14 05:51:10.721876] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:03.656 [2024-07-14 05:51:10.721893] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:03.656 request: 00:37:03.656 { 00:37:03.656 "name": "nvme0", 00:37:03.656 "trtype": "tcp", 00:37:03.656 "traddr": "127.0.0.1", 00:37:03.656 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:03.656 "adrfam": "ipv4", 00:37:03.656 "trsvcid": "4420", 00:37:03.656 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:03.656 "psk": ":spdk-test:key1", 00:37:03.656 "method": "bdev_nvme_attach_controller", 00:37:03.656 "req_id": 1 00:37:03.656 } 00:37:03.656 Got JSON-RPC error response 00:37:03.656 response: 00:37:03.656 { 00:37:03.656 "code": -5, 00:37:03.656 "message": "Input/output error" 00:37:03.656 } 00:37:03.656 05:51:10 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:03.656 05:51:10 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:03.656 05:51:10 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:03.656 05:51:10 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:03.657 05:51:10 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:03.657 05:51:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:03.657 05:51:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:03.657 05:51:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:03.657 05:51:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:03.657 05:51:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:03.657 05:51:10 keyring_linux -- keyring/linux.sh@33 -- # sn=74703961 00:37:03.657 05:51:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 74703961 00:37:03.657 1 links removed 00:37:03.657 05:51:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:03.657 05:51:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:03.657 05:51:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:03.657 05:51:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:03.657 05:51:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:03.657 05:51:10 keyring_linux -- keyring/linux.sh@33 -- # sn=385544526 00:37:03.657 05:51:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 385544526 00:37:03.657 1 links removed 00:37:03.657 05:51:10 keyring_linux -- keyring/linux.sh@41 
-- # killprocess 3421521 00:37:03.657 05:51:10 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 3421521 ']' 00:37:03.657 05:51:10 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 3421521 00:37:03.657 05:51:10 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:03.657 05:51:10 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:03.657 05:51:10 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3421521 00:37:03.914 05:51:10 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:03.914 05:51:10 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:03.914 05:51:10 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3421521' 00:37:03.914 killing process with pid 3421521 00:37:03.914 05:51:10 keyring_linux -- common/autotest_common.sh@965 -- # kill 3421521 00:37:03.914 Received shutdown signal, test time was about 1.000000 seconds 00:37:03.914 00:37:03.914 Latency(us) 00:37:03.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:03.914 =================================================================================================================== 00:37:03.914 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:03.914 05:51:10 keyring_linux -- common/autotest_common.sh@970 -- # wait 3421521 00:37:03.914 05:51:11 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3421393 00:37:03.914 05:51:11 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 3421393 ']' 00:37:03.914 05:51:11 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 3421393 00:37:03.914 05:51:11 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:03.914 05:51:11 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:03.914 05:51:11 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3421393 00:37:04.172 05:51:11 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:04.172 05:51:11 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:04.172 05:51:11 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3421393' 00:37:04.172 killing process with pid 3421393 00:37:04.172 05:51:11 keyring_linux -- common/autotest_common.sh@965 -- # kill 3421393 00:37:04.172 05:51:11 keyring_linux -- common/autotest_common.sh@970 -- # wait 3421393 00:37:04.430 00:37:04.430 real 0m4.980s 00:37:04.430 user 0m9.295s 00:37:04.430 sys 0m1.508s 00:37:04.430 05:51:11 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:04.430 05:51:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:04.430 ************************************ 00:37:04.430 END TEST keyring_linux 00:37:04.430 ************************************ 00:37:04.430 05:51:11 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:04.430 05:51:11 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:04.430 05:51:11 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:04.430 05:51:11 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:04.430 05:51:11 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:04.430 05:51:11 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:04.430 05:51:11 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:04.430 05:51:11 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:04.430 05:51:11 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:04.430 05:51:11 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 
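The cleanup traced above unlinks both PSKs from the session keyring by serial before the target is torn down; the "1 links removed" lines confirm each removal. A short sketch of that teardown, assuming the serials resolved earlier in the run:

  for name in :spdk-test:key0 :spdk-test:key1; do
      sn=$(keyctl search @s user "$name")   # 74703961 / 385544526 in this run
      keyctl unlink "$sn"                   # prints "1 links removed"
  done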
00:37:04.430 05:51:11 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:04.430 05:51:11 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:04.430 05:51:11 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:04.430 05:51:11 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:04.430 05:51:11 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:04.430 05:51:11 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:04.430 05:51:11 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:04.430 05:51:11 -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:04.430 05:51:11 -- common/autotest_common.sh@10 -- # set +x 00:37:04.430 05:51:11 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:04.430 05:51:11 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:37:04.430 05:51:11 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:37:04.430 05:51:11 -- common/autotest_common.sh@10 -- # set +x 00:37:06.327 INFO: APP EXITING 00:37:06.327 INFO: killing all VMs 00:37:06.327 INFO: killing vhost app 00:37:06.327 INFO: EXIT DONE 00:37:07.699 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:07.699 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:07.699 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:07.699 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:07.699 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:07.699 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:07.699 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:07.699 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:07.699 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:07.699 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:07.699 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:07.699 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:07.699 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:07.699 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:07.699 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:07.699 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:07.699 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:09.074 Cleaning 00:37:09.074 Removing: /var/run/dpdk/spdk0/config 00:37:09.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:09.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:09.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:09.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:09.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:09.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:09.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:09.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:09.074 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:09.074 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:09.074 Removing: /var/run/dpdk/spdk1/config 00:37:09.074 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:09.074 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:09.074 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:09.074 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:09.074 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:09.074 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:09.074 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:09.074 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:09.074 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:09.074 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:09.074 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:09.074 Removing: /var/run/dpdk/spdk2/config 00:37:09.074 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:09.074 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:09.074 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:09.074 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:09.074 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:09.074 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:09.074 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:09.074 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:09.074 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:09.074 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:09.074 Removing: /var/run/dpdk/spdk3/config 00:37:09.074 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:09.074 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:09.074 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:09.074 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:09.074 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:09.074 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:09.074 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:09.074 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:09.074 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:09.074 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:09.074 Removing: /var/run/dpdk/spdk4/config 00:37:09.074 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:09.074 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:09.074 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:09.074 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:09.074 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:09.074 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:09.074 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:09.074 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:09.074 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:09.074 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:09.074 Removing: /dev/shm/bdev_svc_trace.1 00:37:09.074 Removing: /dev/shm/nvmf_trace.0 00:37:09.074 Removing: /dev/shm/spdk_tgt_trace.pid3101342 00:37:09.074 Removing: /var/run/dpdk/spdk0 00:37:09.074 Removing: /var/run/dpdk/spdk1 00:37:09.074 Removing: /var/run/dpdk/spdk2 00:37:09.074 Removing: /var/run/dpdk/spdk3 00:37:09.074 Removing: /var/run/dpdk/spdk4 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3099790 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3100522 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3101342 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3101773 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3102458 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3102598 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3103322 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3103331 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3103573 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3104792 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3105808 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3105992 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3106294 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3106508 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3106697 00:37:09.074 Removing: 
/var/run/dpdk/spdk_pid3106853 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3107013 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3107191 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3107641 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3109989 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3110153 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3110315 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3110327 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3110747 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3110761 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3111187 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3111195 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3111485 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3111491 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3111657 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3111784 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3112153 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3112312 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3112505 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3112674 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3112726 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3112885 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3113040 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3113276 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3113474 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3113633 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3113829 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3114170 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3114332 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3114485 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3114707 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3115163 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3115574 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3115736 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3116003 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3116171 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3116326 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3116487 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3116756 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3116926 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3117077 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3117352 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3117424 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3117628 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3119696 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3173098 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3176096 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3183049 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3186332 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3188807 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3189212 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3196336 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3196338 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3196988 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3197624 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3198183 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3198585 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3198602 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3198849 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3198975 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3198988 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3199632 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3200181 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3200839 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3201240 00:37:09.074 Removing: /var/run/dpdk/spdk_pid3201243 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3201498 00:37:09.075 Removing: 
/var/run/dpdk/spdk_pid3202344 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3203097 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3208955 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3209226 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3211724 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3215423 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3217585 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3223839 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3229027 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3230218 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3230886 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3241180 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3243780 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3269161 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3271936 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3273003 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3274317 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3274447 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3274537 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3274610 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3275044 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3276356 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3276957 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3277384 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3278992 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3279298 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3279855 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3282248 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3285500 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3289043 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3312607 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3315296 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3319067 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3320134 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3321221 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3323877 00:37:09.075 Removing: /var/run/dpdk/spdk_pid3326621 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3330824 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3330828 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3333592 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3333722 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3333867 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3334221 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3334253 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3335331 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3336511 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3337685 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3338865 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3340039 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3341235 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3345033 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3345369 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3346765 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3347498 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3351195 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3353062 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3357082 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3360528 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3366632 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3370968 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3370970 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3383161 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3383571 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3384091 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3384496 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3385081 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3385485 00:37:09.333 Removing: 
/var/run/dpdk/spdk_pid3385894 00:37:09.333 Removing: /var/run/dpdk/spdk_pid3386297 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3388791 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3388946 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3393328 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3393472 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3395126 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3400081 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3400161 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3402991 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3404327 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3405723 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3406580 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3407980 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3408809 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3414048 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3414393 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3414781 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3416341 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3416620 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3417017 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3419451 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3419459 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3421028 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3421393 00:37:09.334 Removing: /var/run/dpdk/spdk_pid3421521 00:37:09.334 Clean 00:37:09.334 05:51:16 -- common/autotest_common.sh@1447 -- # return 0 00:37:09.334 05:51:16 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:09.334 05:51:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:09.334 05:51:16 -- common/autotest_common.sh@10 -- # set +x 00:37:09.334 05:51:16 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:09.334 05:51:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:09.334 05:51:16 -- common/autotest_common.sh@10 -- # set +x 00:37:09.334 05:51:16 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:09.334 05:51:16 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:09.334 05:51:16 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:09.334 05:51:16 -- spdk/autotest.sh@391 -- # hash lcov 00:37:09.334 05:51:16 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:09.334 05:51:16 -- spdk/autotest.sh@393 -- # hostname 00:37:09.334 05:51:16 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:09.592 geninfo: WARNING: invalid characters removed from testname! 
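The coverage steps that follow merge the pre-test baseline trace with the trace captured during the test run, then strip bundled and system sources from the combined tracefile. A minimal sketch of the equivalent lcov invocations, assuming the cov_base.info/cov_test.info names used in this run (the final genhtml step is illustrative only and is not executed by this job):
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info        # merge baseline and test-run captures
  lcov -q -r cov_total.info '*/dpdk/*' '/usr/*' -o cov_total.info    # remove bundled DPDK and system paths from the total
  genhtml -o coverage_html cov_total.info                            # hypothetical follow-on: render an HTML report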
00:37:41.712 05:51:44 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:41.712 05:51:48 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:44.988 05:51:51 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:47.511 05:51:54 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:50.790 05:51:57 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:54.069 05:52:00 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:56.623 05:52:03 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:56.623 05:52:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:56.623 05:52:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:56.623 05:52:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:56.623 05:52:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:56.623 05:52:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.623 05:52:03 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.623 05:52:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.623 05:52:03 -- paths/export.sh@5 -- $ export PATH 00:37:56.623 05:52:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.623 05:52:03 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:56.623 05:52:03 -- common/autobuild_common.sh@437 -- $ date +%s 00:37:56.623 05:52:03 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720929123.XXXXXX 00:37:56.623 05:52:03 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720929123.mQFSTY 00:37:56.623 05:52:03 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:37:56.623 05:52:03 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:37:56.623 05:52:03 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:37:56.623 05:52:03 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:37:56.623 05:52:03 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:56.623 05:52:03 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:56.623 05:52:03 -- common/autobuild_common.sh@453 -- $ get_config_params 00:37:56.623 05:52:03 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:37:56.623 05:52:03 -- common/autotest_common.sh@10 -- $ set +x 00:37:56.623 05:52:03 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:37:56.623 05:52:03 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:37:56.623 05:52:03 -- pm/common@17 -- $ local monitor 00:37:56.623 05:52:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:56.623 05:52:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:56.623 05:52:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:56.623 
05:52:03 -- pm/common@21 -- $ date +%s 00:37:56.623 05:52:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:56.623 05:52:03 -- pm/common@25 -- $ sleep 1 00:37:56.623 05:52:03 -- pm/common@21 -- $ date +%s 00:37:56.623 05:52:03 -- pm/common@21 -- $ date +%s 00:37:56.623 05:52:03 -- pm/common@21 -- $ date +%s 00:37:56.623 05:52:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720929123 00:37:56.623 05:52:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720929123 00:37:56.623 05:52:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720929123 00:37:56.623 05:52:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720929123 00:37:56.623 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720929123_collect-vmstat.pm.log 00:37:56.623 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720929123_collect-cpu-load.pm.log 00:37:56.623 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720929123_collect-cpu-temp.pm.log 00:37:56.623 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720929123_collect-bmc-pm.bmc.pm.log 00:37:57.590 05:52:04 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:37:57.590 05:52:04 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:37:57.590 05:52:04 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:57.590 05:52:04 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:57.590 05:52:04 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:57.590 05:52:04 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:57.590 05:52:04 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:57.590 05:52:04 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:57.590 05:52:04 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:57.590 05:52:04 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:57.590 05:52:04 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:57.590 05:52:04 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:37:57.590 05:52:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:37:57.590 05:52:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:37:57.590 05:52:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:57.590 05:52:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:37:57.590 05:52:04 -- pm/common@44 -- $ pid=3433271 00:37:57.590 05:52:04 -- pm/common@50 -- $ kill -TERM 3433271 00:37:57.590 05:52:04 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:37:57.590 05:52:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:37:57.590 05:52:04 -- pm/common@44 -- $ pid=3433273 00:37:57.590 05:52:04 -- pm/common@50 -- $ kill -TERM 3433273 00:37:57.590 05:52:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:57.590 05:52:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:37:57.590 05:52:04 -- pm/common@44 -- $ pid=3433274 00:37:57.590 05:52:04 -- pm/common@50 -- $ kill -TERM 3433274 00:37:57.590 05:52:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:57.590 05:52:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:37:57.590 05:52:04 -- pm/common@44 -- $ pid=3433305 00:37:57.590 05:52:04 -- pm/common@50 -- $ sudo -E kill -TERM 3433305 00:37:57.590 + [[ -n 2994994 ]] 00:37:57.590 + sudo kill 2994994 00:37:57.599 [Pipeline] } 00:37:57.619 [Pipeline] // stage 00:37:57.626 [Pipeline] } 00:37:57.642 [Pipeline] // timeout 00:37:57.649 [Pipeline] } 00:37:57.667 [Pipeline] // catchError 00:37:57.674 [Pipeline] } 00:37:57.693 [Pipeline] // wrap 00:37:57.701 [Pipeline] } 00:37:57.713 [Pipeline] // catchError 00:37:57.723 [Pipeline] stage 00:37:57.724 [Pipeline] { (Epilogue) 00:37:57.738 [Pipeline] catchError 00:37:57.740 [Pipeline] { 00:37:57.754 [Pipeline] echo 00:37:57.755 Cleanup processes 00:37:57.759 [Pipeline] sh 00:37:58.036 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:58.036 3433410 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:37:58.036 3433538 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:58.049 [Pipeline] sh 00:37:58.328 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:58.328 ++ grep -v 'sudo pgrep' 00:37:58.328 ++ awk '{print $1}' 00:37:58.328 + sudo kill -9 3433410 00:37:58.339 [Pipeline] sh 00:37:58.618 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:08.592 [Pipeline] sh 00:38:08.873 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:08.873 Artifacts sizes are good 00:38:08.889 [Pipeline] archiveArtifacts 00:38:08.895 Archiving artifacts 00:38:09.114 [Pipeline] sh 00:38:09.396 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:09.411 [Pipeline] cleanWs 00:38:09.420 [WS-CLEANUP] Deleting project workspace... 00:38:09.420 [WS-CLEANUP] Deferred wipeout is used... 00:38:09.427 [WS-CLEANUP] done 00:38:09.429 [Pipeline] } 00:38:09.450 [Pipeline] // catchError 00:38:09.463 [Pipeline] sh 00:38:09.741 + logger -p user.info -t JENKINS-CI 00:38:09.749 [Pipeline] } 00:38:09.765 [Pipeline] // stage 00:38:09.770 [Pipeline] } 00:38:09.786 [Pipeline] // node 00:38:09.792 [Pipeline] End of Pipeline 00:38:09.825 Finished: SUCCESS